Test Report: Docker_Linux_containerd_arm64 21409

0aa34a444c66e47b3763835c9f1ccee8527d3e22:2025-09-04:41276

Failed tests (1/332)

Order  Failed test               Duration
54     TestDockerEnvContainerd   52.51s
TestDockerEnvContainerd (52.51s)
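The failing step is at docker_test.go:250: "docker image ls" against the SSH-forwarded Docker endpoint returned EOF, although "docker version" and "docker build" had just succeeded over the same tunnel. A minimal sketch of reproducing that step by hand, assuming the same profile name and that the environment comes from whatever "docker-env --ssh-host --ssh-add" prints (SSH_AUTH_SOCK, SSH_AGENT_PID, and DOCKER_HOST pointing at the host port mapped to the container's 22/tcp, 33884 in this run):

    # Sketch only: eval applies the export lines emitted by docker-env,
    # including DOCKER_HOST=ssh://docker@127.0.0.1:<22/tcp host port>.
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-668100)"
    docker version     # passed in this run
    docker image ls    # the step that exited 1 with "error during connect ... EOF"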

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-668100 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-668100 --driver=docker  --container-runtime=containerd: (35.002166027s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-668100"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KW3V3hsx1aeO/agent.900266" SSH_AGENT_PID="900267" DOCKER_HOST=ssh://docker@127.0.0.1:33884 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KW3V3hsx1aeO/agent.900266" SSH_AGENT_PID="900267" DOCKER_HOST=ssh://docker@127.0.0.1:33884 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KW3V3hsx1aeO/agent.900266" SSH_AGENT_PID="900267" DOCKER_HOST=ssh://docker@127.0.0.1:33884 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.235158213s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KW3V3hsx1aeO/agent.900266" SSH_AGENT_PID="900267" DOCKER_HOST=ssh://docker@127.0.0.1:33884 docker image ls"
docker_test.go:250: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KW3V3hsx1aeO/agent.900266" SSH_AGENT_PID="900267" DOCKER_HOST=ssh://docker@127.0.0.1:33884 docker image ls": exit status 1 (687.892702ms)

** stderr ** 
	error during connect: Get "http://docker.example.com/v1.43/images/json": EOF

** /stderr **
docker_test.go:252: failed to execute 'docker image ls', error: exit status 1, output: 
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.43/images/json": EOF

** /stderr **
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-09-04 06:27:19.575725989 +0000 UTC m=+456.708784175
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-668100
helpers_test.go:243: (dbg) docker inspect dockerenv-668100:

-- stdout --
	[
	    {
	        "Id": "cb2217a70bc44049e2e79dd29e3abd49baeb31607570e3e082ed38bde4b94476",
	        "Created": "2025-09-04T06:26:36.49152314Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 897751,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:26:36.55667274Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:05a67d9d64bd61dbd33e828ddc4dedd9a0cf93c553e7627e8e0a3cfe0b4eba90",
	        "ResolvConfPath": "/var/lib/docker/containers/cb2217a70bc44049e2e79dd29e3abd49baeb31607570e3e082ed38bde4b94476/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb2217a70bc44049e2e79dd29e3abd49baeb31607570e3e082ed38bde4b94476/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb2217a70bc44049e2e79dd29e3abd49baeb31607570e3e082ed38bde4b94476/hosts",
	        "LogPath": "/var/lib/docker/containers/cb2217a70bc44049e2e79dd29e3abd49baeb31607570e3e082ed38bde4b94476/cb2217a70bc44049e2e79dd29e3abd49baeb31607570e3e082ed38bde4b94476-json.log",
	        "Name": "/dockerenv-668100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "dockerenv-668100:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "dockerenv-668100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cb2217a70bc44049e2e79dd29e3abd49baeb31607570e3e082ed38bde4b94476",
	                "LowerDir": "/var/lib/docker/overlay2/349b08f6c88dbd2af236739a3f937cdd3313bda9b12a3962648d87de8d11a13c-init/diff:/var/lib/docker/overlay2/fe768064e77edef7ab034159629a7675e982c755adb79a9cc21b6b108aaa3716/diff",
	                "MergedDir": "/var/lib/docker/overlay2/349b08f6c88dbd2af236739a3f937cdd3313bda9b12a3962648d87de8d11a13c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/349b08f6c88dbd2af236739a3f937cdd3313bda9b12a3962648d87de8d11a13c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/349b08f6c88dbd2af236739a3f937cdd3313bda9b12a3962648d87de8d11a13c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "dockerenv-668100",
	                "Source": "/var/lib/docker/volumes/dockerenv-668100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-668100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-668100",
	                "name.minikube.sigs.k8s.io": "dockerenv-668100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e8a6213b869eb0be2c4efa1607d6fbc4c75486f7f239e50a5471e6dfef93040",
	            "SandboxKey": "/var/run/docker/netns/5e8a6213b869",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-668100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:fd:78:b4:54:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d906c13f66df78a703595b0dfeefcfe4ed591c611fcc42fba2db6dcf65f8b9d0",
	                    "EndpointID": "fea7e4ef1b5f79425a86d7be14bcc71c960b1de1539b3c50417cdab984e5dc82",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-668100",
	                        "cb2217a70bc4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p dockerenv-668100 -n dockerenv-668100
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p dockerenv-668100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p dockerenv-668100 logs -n 25: (1.258359851s)
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                       ARGS                                                        │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ip         │ addons-903438 ip                                                                                                  │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:24 UTC │ 04 Sep 25 06:24 UTC │
	│ addons     │ addons-903438 addons disable registry --alsologtostderr -v=1                                                      │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:24 UTC │ 04 Sep 25 06:24 UTC │
	│ addons     │ addons-903438 addons disable nvidia-device-plugin --alsologtostderr -v=1                                          │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:24 UTC │ 04 Sep 25 06:24 UTC │
	│ addons     │ addons-903438 addons disable cloud-spanner --alsologtostderr -v=1                                                 │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:24 UTC │ 04 Sep 25 06:24 UTC │
	│ addons     │ enable headlamp -p addons-903438 --alsologtostderr -v=1                                                           │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:24 UTC │ 04 Sep 25 06:24 UTC │
	│ ssh        │ addons-903438 ssh cat /opt/local-path-provisioner/pvc-6aa63eb0-ba24-46af-ab92-52e9a2ec4d21_default_test-pvc/file1 │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:24 UTC │ 04 Sep 25 06:24 UTC │
	│ addons     │ addons-903438 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                   │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:24 UTC │ 04 Sep 25 06:25 UTC │
	│ addons     │ addons-903438 addons disable headlamp --alsologtostderr -v=1                                                      │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:25 UTC │ 04 Sep 25 06:25 UTC │
	│ addons     │ addons-903438 addons disable metrics-server --alsologtostderr -v=1                                                │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:25 UTC │ 04 Sep 25 06:25 UTC │
	│ addons     │ addons-903438 addons disable inspektor-gadget --alsologtostderr -v=1                                              │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:25 UTC │ 04 Sep 25 06:25 UTC │
	│ addons     │ addons-903438 addons disable volumesnapshots --alsologtostderr -v=1                                               │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:25 UTC │ 04 Sep 25 06:25 UTC │
	│ addons     │ addons-903438 addons disable csi-hostpath-driver --alsologtostderr -v=1                                           │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:25 UTC │ 04 Sep 25 06:26 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-903438                                    │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ addons     │ addons-903438 addons disable registry-creds --alsologtostderr -v=1                                                │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ ssh        │ addons-903438 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                          │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ ip         │ addons-903438 ip                                                                                                  │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ addons     │ addons-903438 addons disable ingress-dns --alsologtostderr -v=1                                                   │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ addons     │ addons-903438 addons disable ingress --alsologtostderr -v=1                                                       │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ stop       │ -p addons-903438                                                                                                  │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ addons     │ enable dashboard -p addons-903438                                                                                 │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ addons     │ disable dashboard -p addons-903438                                                                                │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ addons     │ disable gvisor -p addons-903438                                                                                   │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ delete     │ -p addons-903438                                                                                                  │ addons-903438    │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:26 UTC │
	│ start      │ -p dockerenv-668100 --driver=docker  --container-runtime=containerd                                               │ dockerenv-668100 │ jenkins │ v1.36.0 │ 04 Sep 25 06:26 UTC │ 04 Sep 25 06:27 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-668100                                                                          │ dockerenv-668100 │ jenkins │ v1.36.0 │ 04 Sep 25 06:27 UTC │ 04 Sep 25 06:27 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:26:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:26:31.230757  897361 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:26:31.230878  897361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:26:31.230882  897361 out.go:374] Setting ErrFile to fd 2...
	I0904 06:26:31.230885  897361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:26:31.231148  897361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:26:31.231548  897361 out.go:368] Setting JSON to false
	I0904 06:26:31.232328  897361 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14941,"bootTime":1756952251,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0904 06:26:31.232390  897361 start.go:140] virtualization:  
	I0904 06:26:31.236890  897361 out.go:179] * [dockerenv-668100] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0904 06:26:31.241721  897361 notify.go:220] Checking for updates...
	I0904 06:26:31.245835  897361 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:26:31.249329  897361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:26:31.252540  897361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	I0904 06:26:31.255670  897361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	I0904 06:26:31.258796  897361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 06:26:31.261927  897361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:26:31.265245  897361 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:26:31.290462  897361 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:26:31.290565  897361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:26:31.354096  897361 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-04 06:26:31.345035869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:26:31.354191  897361 docker.go:318] overlay module found
	I0904 06:26:31.357459  897361 out.go:179] * Using the docker driver based on user configuration
	I0904 06:26:31.360266  897361 start.go:304] selected driver: docker
	I0904 06:26:31.360272  897361 start.go:918] validating driver "docker" against <nil>
	I0904 06:26:31.360284  897361 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:26:31.360399  897361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:26:31.413690  897361 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-04 06:26:31.404143301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:26:31.413832  897361 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 06:26:31.414107  897361 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0904 06:26:31.414258  897361 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 06:26:31.417349  897361 out.go:179] * Using Docker driver with root privileges
	I0904 06:26:31.420240  897361 cni.go:84] Creating CNI manager for ""
	I0904 06:26:31.420304  897361 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 06:26:31.420311  897361 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 06:26:31.420385  897361 start.go:348] cluster config:
	{Name:dockerenv-668100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-668100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:26:31.423562  897361 out.go:179] * Starting "dockerenv-668100" primary control-plane node in "dockerenv-668100" cluster
	I0904 06:26:31.426343  897361 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0904 06:26:31.429262  897361 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:26:31.432082  897361 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 06:26:31.432137  897361 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0904 06:26:31.432176  897361 cache.go:58] Caching tarball of preloaded images
	I0904 06:26:31.432180  897361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:26:31.432309  897361 preload.go:172] Found /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 06:26:31.432320  897361 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0904 06:26:31.432654  897361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/config.json ...
	I0904 06:26:31.432674  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/config.json: {Name:mk48e1434515eebf26f473c4887134b4f912e69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:31.455767  897361 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 06:26:31.455779  897361 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 06:26:31.455799  897361 cache.go:232] Successfully downloaded all kic artifacts
	I0904 06:26:31.455831  897361 start.go:360] acquireMachinesLock for dockerenv-668100: {Name:mkdf493a7becb458700087979914aeee71b2a19f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:26:31.456580  897361 start.go:364] duration metric: took 733.213µs to acquireMachinesLock for "dockerenv-668100"
	I0904 06:26:31.456611  897361 start.go:93] Provisioning new machine with config: &{Name:dockerenv-668100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-668100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0904 06:26:31.456679  897361 start.go:125] createHost starting for "" (driver="docker")
	I0904 06:26:31.460032  897361 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0904 06:26:31.460289  897361 start.go:159] libmachine.API.Create for "dockerenv-668100" (driver="docker")
	I0904 06:26:31.460318  897361 client.go:168] LocalClient.Create starting
	I0904 06:26:31.460383  897361 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca.pem
	I0904 06:26:31.460422  897361 main.go:141] libmachine: Decoding PEM data...
	I0904 06:26:31.460433  897361 main.go:141] libmachine: Parsing certificate...
	I0904 06:26:31.460488  897361 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-875589/.minikube/certs/cert.pem
	I0904 06:26:31.460507  897361 main.go:141] libmachine: Decoding PEM data...
	I0904 06:26:31.460515  897361 main.go:141] libmachine: Parsing certificate...
	I0904 06:26:31.460876  897361 cli_runner.go:164] Run: docker network inspect dockerenv-668100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 06:26:31.476481  897361 cli_runner.go:211] docker network inspect dockerenv-668100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 06:26:31.476550  897361 network_create.go:284] running [docker network inspect dockerenv-668100] to gather additional debugging logs...
	I0904 06:26:31.476565  897361 cli_runner.go:164] Run: docker network inspect dockerenv-668100
	W0904 06:26:31.492369  897361 cli_runner.go:211] docker network inspect dockerenv-668100 returned with exit code 1
	I0904 06:26:31.492388  897361 network_create.go:287] error running [docker network inspect dockerenv-668100]: docker network inspect dockerenv-668100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-668100 not found
	I0904 06:26:31.492399  897361 network_create.go:289] output of [docker network inspect dockerenv-668100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-668100 not found
	
	** /stderr **
	I0904 06:26:31.492521  897361 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:26:31.513524  897361 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018c2a80}
	I0904 06:26:31.513554  897361 network_create.go:124] attempt to create docker network dockerenv-668100 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 06:26:31.513604  897361 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-668100 dockerenv-668100
	I0904 06:26:31.570278  897361 network_create.go:108] docker network dockerenv-668100 192.168.49.0/24 created
	I0904 06:26:31.570299  897361 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-668100" container
	I0904 06:26:31.570386  897361 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 06:26:31.586217  897361 cli_runner.go:164] Run: docker volume create dockerenv-668100 --label name.minikube.sigs.k8s.io=dockerenv-668100 --label created_by.minikube.sigs.k8s.io=true
	I0904 06:26:31.605345  897361 oci.go:103] Successfully created a docker volume dockerenv-668100
	I0904 06:26:31.605433  897361 cli_runner.go:164] Run: docker run --rm --name dockerenv-668100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-668100 --entrypoint /usr/bin/test -v dockerenv-668100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -d /var/lib
	I0904 06:26:32.163599  897361 oci.go:107] Successfully prepared a docker volume dockerenv-668100
	I0904 06:26:32.163642  897361 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 06:26:32.163660  897361 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 06:26:32.163737  897361 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-668100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 06:26:36.421304  897361 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-668100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir: (4.257532147s)
	I0904 06:26:36.421338  897361 kic.go:203] duration metric: took 4.257660412s to extract preloaded images to volume ...
	W0904 06:26:36.421741  897361 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 06:26:36.421841  897361 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 06:26:36.476385  897361 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-668100 --name dockerenv-668100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-668100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-668100 --network dockerenv-668100 --ip 192.168.49.2 --volume dockerenv-668100:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc
	I0904 06:26:36.760431  897361 cli_runner.go:164] Run: docker container inspect dockerenv-668100 --format={{.State.Running}}
	I0904 06:26:36.780129  897361 cli_runner.go:164] Run: docker container inspect dockerenv-668100 --format={{.State.Status}}
	I0904 06:26:36.803587  897361 cli_runner.go:164] Run: docker exec dockerenv-668100 stat /var/lib/dpkg/alternatives/iptables
	I0904 06:26:36.861437  897361 oci.go:144] the created container "dockerenv-668100" has a running status.
	I0904 06:26:36.861463  897361 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa...
	I0904 06:26:36.985120  897361 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 06:26:37.014108  897361 cli_runner.go:164] Run: docker container inspect dockerenv-668100 --format={{.State.Status}}
	I0904 06:26:37.035668  897361 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 06:26:37.035679  897361 kic_runner.go:114] Args: [docker exec --privileged dockerenv-668100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 06:26:37.094797  897361 cli_runner.go:164] Run: docker container inspect dockerenv-668100 --format={{.State.Status}}
	I0904 06:26:37.125369  897361 machine.go:93] provisionDockerMachine start ...
	I0904 06:26:37.125469  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:37.154766  897361 main.go:141] libmachine: Using SSH client type: native
	I0904 06:26:37.155102  897361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33884 <nil> <nil>}
	I0904 06:26:37.155108  897361 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:26:37.155733  897361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49616->127.0.0.1:33884: read: connection reset by peer
	I0904 06:26:40.284347  897361 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-668100
	
	I0904 06:26:40.284361  897361 ubuntu.go:182] provisioning hostname "dockerenv-668100"
	I0904 06:26:40.284423  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:40.301621  897361 main.go:141] libmachine: Using SSH client type: native
	I0904 06:26:40.301930  897361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33884 <nil> <nil>}
	I0904 06:26:40.301939  897361 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-668100 && echo "dockerenv-668100" | sudo tee /etc/hostname
	I0904 06:26:40.437210  897361 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-668100
	
	I0904 06:26:40.437291  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:40.455069  897361 main.go:141] libmachine: Using SSH client type: native
	I0904 06:26:40.455365  897361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33884 <nil> <nil>}
	I0904 06:26:40.455379  897361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-668100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-668100/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-668100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:26:40.581244  897361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:26:40.581262  897361 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-875589/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-875589/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-875589/.minikube}
	I0904 06:26:40.581278  897361 ubuntu.go:190] setting up certificates
	I0904 06:26:40.581286  897361 provision.go:84] configureAuth start
	I0904 06:26:40.581358  897361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-668100
	I0904 06:26:40.598354  897361 provision.go:143] copyHostCerts
	I0904 06:26:40.598416  897361 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-875589/.minikube/cert.pem, removing ...
	I0904 06:26:40.598424  897361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-875589/.minikube/cert.pem
	I0904 06:26:40.598510  897361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-875589/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-875589/.minikube/cert.pem (1123 bytes)
	I0904 06:26:40.598609  897361 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-875589/.minikube/key.pem, removing ...
	I0904 06:26:40.598614  897361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-875589/.minikube/key.pem
	I0904 06:26:40.598639  897361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-875589/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-875589/.minikube/key.pem (1675 bytes)
	I0904 06:26:40.598697  897361 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-875589/.minikube/ca.pem, removing ...
	I0904 06:26:40.598705  897361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-875589/.minikube/ca.pem
	I0904 06:26:40.598727  897361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-875589/.minikube/ca.pem (1082 bytes)
	I0904 06:26:40.598778  897361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-875589/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca-key.pem org=jenkins.dockerenv-668100 san=[127.0.0.1 192.168.49.2 dockerenv-668100 localhost minikube]
	I0904 06:26:41.829759  897361 provision.go:177] copyRemoteCerts
	I0904 06:26:41.829811  897361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:26:41.829855  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:41.847108  897361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa Username:docker}
	I0904 06:26:41.938363  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 06:26:41.964272  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:26:41.989918  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0904 06:26:42.017156  897361 provision.go:87] duration metric: took 1.435854653s to configureAuth
	I0904 06:26:42.017176  897361 ubuntu.go:206] setting minikube options for container-runtime
	I0904 06:26:42.017406  897361 config.go:182] Loaded profile config "dockerenv-668100": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:26:42.017412  897361 machine.go:96] duration metric: took 4.8920323s to provisionDockerMachine
	I0904 06:26:42.017418  897361 client.go:171] duration metric: took 10.557096934s to LocalClient.Create
	I0904 06:26:42.017445  897361 start.go:167] duration metric: took 10.557153961s to libmachine.API.Create "dockerenv-668100"
	I0904 06:26:42.017453  897361 start.go:293] postStartSetup for "dockerenv-668100" (driver="docker")
	I0904 06:26:42.017463  897361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:26:42.017517  897361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:26:42.017556  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:42.037008  897361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa Username:docker}
	I0904 06:26:42.137368  897361 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:26:42.142876  897361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 06:26:42.142903  897361 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 06:26:42.142912  897361 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 06:26:42.142919  897361 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 06:26:42.142930  897361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-875589/.minikube/addons for local assets ...
	I0904 06:26:42.143007  897361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-875589/.minikube/files for local assets ...
	I0904 06:26:42.143030  897361 start.go:296] duration metric: took 125.571406ms for postStartSetup
	I0904 06:26:42.143418  897361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-668100
	I0904 06:26:42.165089  897361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/config.json ...
	I0904 06:26:42.165414  897361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:26:42.165466  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:42.186768  897361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa Username:docker}
	I0904 06:26:42.283420  897361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 06:26:42.288625  897361 start.go:128] duration metric: took 10.831925351s to createHost
	I0904 06:26:42.288641  897361 start.go:83] releasing machines lock for "dockerenv-668100", held for 10.83205163s
	I0904 06:26:42.288723  897361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-668100
	I0904 06:26:42.306964  897361 ssh_runner.go:195] Run: cat /version.json
	I0904 06:26:42.307002  897361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:26:42.307018  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:42.307052  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:26:42.328479  897361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa Username:docker}
	I0904 06:26:42.337216  897361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa Username:docker}
	I0904 06:26:42.547480  897361 ssh_runner.go:195] Run: systemctl --version
	I0904 06:26:42.555320  897361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 06:26:42.559983  897361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0904 06:26:42.586188  897361 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0904 06:26:42.586258  897361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:26:42.620239  897361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
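
The two find invocations above do the CNI housekeeping: the first patches any loopback config in place (injecting a "name" field and pinning cniVersion to 1.0.0), the second renames competing bridge/podman configs out of the way so the CNI chosen later owns /etc/cni/net.d. A simplified sketch of the disable step, using the same *.mk_disabled convention the log reports:

	# hedged sketch: park conflicting CNI configs the way minikube does
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
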
	I0904 06:26:42.620258  897361 start.go:495] detecting cgroup driver to use...
	I0904 06:26:42.620293  897361 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 06:26:42.620341  897361 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 06:26:42.633339  897361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 06:26:42.645359  897361 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:26:42.645419  897361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:26:42.660212  897361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:26:42.675541  897361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:26:42.767809  897361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:26:42.868714  897361 docker.go:234] disabling docker service ...
	I0904 06:26:42.868774  897361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:26:42.891527  897361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:26:42.903111  897361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:26:43.001747  897361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:26:43.095711  897361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:26:43.107567  897361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:26:43.124135  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0904 06:26:43.134515  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 06:26:43.144730  897361 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 06:26:43.144793  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 06:26:43.155001  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 06:26:43.165063  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 06:26:43.175267  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 06:26:43.185646  897361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:26:43.195018  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 06:26:43.204862  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 06:26:43.214684  897361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
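
The run of sed edits above rewrites /etc/containerd/config.toml in place: sandbox (pause) image, restrict_oom_score_adj, the runc v2 runtime, the CNI conf_dir, and, because the host uses cgroupfs, SystemdCgroup=false. A quick hedged check of the result inside the node (expected values copied from the commands above):

	# hedged sketch: confirm the patched containerd settings
	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
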
	I0904 06:26:43.224932  897361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:26:43.233933  897361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:26:43.242394  897361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:26:43.329493  897361 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 06:26:43.450262  897361 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0904 06:26:43.450322  897361 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0904 06:26:43.454107  897361 start.go:563] Will wait 60s for crictl version
	I0904 06:26:43.454161  897361 ssh_runner.go:195] Run: which crictl
	I0904 06:26:43.457679  897361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:26:43.495066  897361 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0904 06:26:43.495122  897361 ssh_runner.go:195] Run: containerd --version
	I0904 06:26:43.521088  897361 ssh_runner.go:195] Run: containerd --version
	I0904 06:26:43.550487  897361 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0904 06:26:43.553382  897361 cli_runner.go:164] Run: docker network inspect dockerenv-668100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:26:43.570634  897361 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 06:26:43.574300  897361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
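
The grep/echo/cp pipeline above is an atomic /etc/hosts rewrite: strip any stale host.minikube.internal entry, append the gateway mapping to a temp file, then copy it back with sudo. Afterwards the node resolves the host gateway by name; a hedged check (entry taken from the command itself):

	# hedged sketch: the line the pipeline leaves in the node's /etc/hosts
	$ grep 'host.minikube.internal' /etc/hosts
	192.168.49.1	host.minikube.internal
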
	I0904 06:26:43.589775  897361 kubeadm.go:875] updating cluster {Name:dockerenv-668100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-668100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:26:43.589899  897361 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 06:26:43.589958  897361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:26:43.625415  897361 containerd.go:627] all images are preloaded for containerd runtime.
	I0904 06:26:43.625427  897361 containerd.go:534] Images already preloaded, skipping extraction
	I0904 06:26:43.625486  897361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:26:43.661914  897361 containerd.go:627] all images are preloaded for containerd runtime.
	I0904 06:26:43.661927  897361 cache_images.go:85] Images are preloaded, skipping loading
	I0904 06:26:43.661935  897361 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0904 06:26:43.662042  897361 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-668100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:dockerenv-668100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:26:43.662113  897361 ssh_runner.go:195] Run: sudo crictl info
	I0904 06:26:43.705445  897361 cni.go:84] Creating CNI manager for ""
	I0904 06:26:43.705457  897361 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 06:26:43.705465  897361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:26:43.705488  897361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-668100 NodeName:dockerenv-668100 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:26:43.705598  897361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-668100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:26:43.705669  897361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 06:26:43.715192  897361 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:26:43.715256  897361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:26:43.724103  897361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0904 06:26:43.742347  897361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:26:43.761142  897361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
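
The three scp-from-memory steps stage the kubelet drop-in, the kubelet unit, and the kubeadm config rendered earlier; note the config lands as kubeadm.yaml.new and is only promoted to kubeadm.yaml right before init (visible further down). To sanity-check such a staged config by hand, recent kubeadm releases (v1.26+) ship a validator; a hedged sketch using the binaries path from this run:

	# hedged sketch: validate the staged config before it is promoted
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
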
	I0904 06:26:43.779574  897361 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 06:26:43.782934  897361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:26:43.793900  897361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:26:43.874301  897361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:26:43.889806  897361 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100 for IP: 192.168.49.2
	I0904 06:26:43.889817  897361 certs.go:194] generating shared ca certs ...
	I0904 06:26:43.889832  897361 certs.go:226] acquiring lock for ca certs: {Name:mk68a829d29b2e2571b1ce9f16db9b9845de8f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:43.890000  897361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-875589/.minikube/ca.key
	I0904 06:26:43.890040  897361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-875589/.minikube/proxy-client-ca.key
	I0904 06:26:43.890046  897361 certs.go:256] generating profile certs ...
	I0904 06:26:43.890107  897361 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/client.key
	I0904 06:26:43.890116  897361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/client.crt with IP's: []
	I0904 06:26:44.250887  897361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/client.crt ...
	I0904 06:26:44.250904  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/client.crt: {Name:mkb2455c9be18712fac8627ef07a6ef9d239fcda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:44.251697  897361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/client.key ...
	I0904 06:26:44.251707  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/client.key: {Name:mk929132be16f1cb47c792cef4f456c3eda9634b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:44.252331  897361 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.key.cfa3fe7d
	I0904 06:26:44.252344  897361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.crt.cfa3fe7d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 06:26:44.526309  897361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.crt.cfa3fe7d ...
	I0904 06:26:44.526325  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.crt.cfa3fe7d: {Name:mk38d93f669355f5ab7dc5b0e5061eb71d233206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:44.527009  897361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.key.cfa3fe7d ...
	I0904 06:26:44.527019  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.key.cfa3fe7d: {Name:mk64945c2c916f96625609fb69cf02a40980c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:44.527639  897361 certs.go:381] copying /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.crt.cfa3fe7d -> /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.crt
	I0904 06:26:44.527726  897361 certs.go:385] copying /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.key.cfa3fe7d -> /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.key
	I0904 06:26:44.527783  897361 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.key
	I0904 06:26:44.527795  897361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.crt with IP's: []
	I0904 06:26:45.486956  897361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.crt ...
	I0904 06:26:45.486985  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.crt: {Name:mk01c576cd517d50d1b844c99b7312ec0c289593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:45.487196  897361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.key ...
	I0904 06:26:45.487205  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.key: {Name:mk7e871a26697ff498b395cf069fe229de4029b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:26:45.487946  897361 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:26:45.487985  897361 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-875589/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:26:45.488011  897361 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-875589/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:26:45.488034  897361 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-875589/.minikube/certs/key.pem (1675 bytes)
	I0904 06:26:45.488588  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:26:45.515211  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:26:45.541671  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:26:45.567478  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:26:45.592627  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0904 06:26:45.618497  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:26:45.644780  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:26:45.669792  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/dockerenv-668100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 06:26:45.694876  897361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-875589/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:26:45.720065  897361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:26:45.738353  897361 ssh_runner.go:195] Run: openssl version
	I0904 06:26:45.747135  897361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:26:45.757760  897361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:26:45.761279  897361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:20 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:26:45.761337  897361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:26:45.769178  897361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
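
The openssl/ln pair above follows the OpenSSL hash-directory convention: the certificate's subject hash names the symlink (b5213941.0 in this run) so any TLS client scanning /etc/ssl/certs can locate the CA. The same two steps as a compact sketch:

	# hedged sketch: hash-link a CA into the OpenSSL certs dir
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
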
	I0904 06:26:45.779026  897361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:26:45.782583  897361 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 06:26:45.782620  897361 kubeadm.go:392] StartCluster: {Name:dockerenv-668100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-668100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:26:45.782677  897361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0904 06:26:45.782733  897361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:26:45.827401  897361 cri.go:89] found id: ""
	I0904 06:26:45.827464  897361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:26:45.836588  897361 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 06:26:45.845681  897361 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 06:26:45.845751  897361 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 06:26:45.855018  897361 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 06:26:45.855027  897361 kubeadm.go:157] found existing configuration files:
	
	I0904 06:26:45.855081  897361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 06:26:45.864065  897361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 06:26:45.864119  897361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 06:26:45.872724  897361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 06:26:45.882122  897361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 06:26:45.882181  897361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 06:26:45.890900  897361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 06:26:45.899747  897361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 06:26:45.899812  897361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 06:26:45.908746  897361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 06:26:45.917671  897361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 06:26:45.917726  897361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 06:26:45.926119  897361 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 06:26:45.986712  897361 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 06:26:45.986936  897361 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0904 06:26:46.066404  897361 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 06:27:04.307353  897361 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 06:27:04.307403  897361 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 06:27:04.307491  897361 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 06:27:04.307546  897361 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0904 06:27:04.307580  897361 kubeadm.go:310] OS: Linux
	I0904 06:27:04.307625  897361 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 06:27:04.307673  897361 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 06:27:04.307720  897361 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 06:27:04.307767  897361 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 06:27:04.307815  897361 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 06:27:04.307863  897361 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 06:27:04.307908  897361 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 06:27:04.307956  897361 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 06:27:04.308002  897361 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 06:27:04.308075  897361 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 06:27:04.308169  897361 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 06:27:04.308267  897361 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 06:27:04.308330  897361 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 06:27:04.311308  897361 out.go:252]   - Generating certificates and keys ...
	I0904 06:27:04.311411  897361 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 06:27:04.311476  897361 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 06:27:04.311548  897361 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 06:27:04.311605  897361 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 06:27:04.311665  897361 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 06:27:04.311715  897361 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 06:27:04.311781  897361 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 06:27:04.311919  897361 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [dockerenv-668100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 06:27:04.311977  897361 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 06:27:04.312123  897361 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-668100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 06:27:04.312204  897361 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 06:27:04.312272  897361 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 06:27:04.312348  897361 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 06:27:04.312404  897361 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 06:27:04.312455  897361 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 06:27:04.312512  897361 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 06:27:04.312565  897361 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 06:27:04.312637  897361 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 06:27:04.312700  897361 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 06:27:04.312787  897361 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 06:27:04.312882  897361 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 06:27:04.317923  897361 out.go:252]   - Booting up control plane ...
	I0904 06:27:04.318036  897361 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 06:27:04.318113  897361 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 06:27:04.318179  897361 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 06:27:04.318288  897361 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 06:27:04.318382  897361 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 06:27:04.318486  897361 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 06:27:04.318570  897361 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 06:27:04.318608  897361 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 06:27:04.318739  897361 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 06:27:04.318844  897361 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 06:27:04.318901  897361 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.78182ms
	I0904 06:27:04.318993  897361 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 06:27:04.319073  897361 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0904 06:27:04.319167  897361 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 06:27:04.319245  897361 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 06:27:04.319321  897361 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.889290454s
	I0904 06:27:04.319390  897361 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 6.452711964s
	I0904 06:27:04.319457  897361 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.003703728s
	I0904 06:27:04.319563  897361 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 06:27:04.319688  897361 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 06:27:04.319752  897361 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 06:27:04.319937  897361 kubeadm.go:310] [mark-control-plane] Marking the node dockerenv-668100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 06:27:04.319993  897361 kubeadm.go:310] [bootstrap-token] Using token: jgzjt9.0m09xs3lmsari5xi
	I0904 06:27:04.322898  897361 out.go:252]   - Configuring RBAC rules ...
	I0904 06:27:04.323020  897361 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 06:27:04.323125  897361 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 06:27:04.323274  897361 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 06:27:04.323403  897361 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 06:27:04.323561  897361 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 06:27:04.323654  897361 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 06:27:04.323781  897361 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 06:27:04.323828  897361 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 06:27:04.323873  897361 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 06:27:04.323876  897361 kubeadm.go:310] 
	I0904 06:27:04.323940  897361 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 06:27:04.323944  897361 kubeadm.go:310] 
	I0904 06:27:04.324048  897361 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 06:27:04.324053  897361 kubeadm.go:310] 
	I0904 06:27:04.324080  897361 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 06:27:04.324138  897361 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 06:27:04.324188  897361 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 06:27:04.324192  897361 kubeadm.go:310] 
	I0904 06:27:04.324245  897361 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 06:27:04.324248  897361 kubeadm.go:310] 
	I0904 06:27:04.324297  897361 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 06:27:04.324301  897361 kubeadm.go:310] 
	I0904 06:27:04.324356  897361 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 06:27:04.324471  897361 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 06:27:04.324567  897361 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 06:27:04.324575  897361 kubeadm.go:310] 
	I0904 06:27:04.324681  897361 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 06:27:04.324772  897361 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 06:27:04.324775  897361 kubeadm.go:310] 
	I0904 06:27:04.324878  897361 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgzjt9.0m09xs3lmsari5xi \
	I0904 06:27:04.324992  897361 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b015c4983f19e3224c1e79ee70ccbcf131b704362e85d3e278f8097e427041d1 \
	I0904 06:27:04.325024  897361 kubeadm.go:310] 	--control-plane 
	I0904 06:27:04.325028  897361 kubeadm.go:310] 
	I0904 06:27:04.325159  897361 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 06:27:04.325166  897361 kubeadm.go:310] 
	I0904 06:27:04.325250  897361 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgzjt9.0m09xs3lmsari5xi \
	I0904 06:27:04.325370  897361 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b015c4983f19e3224c1e79ee70ccbcf131b704362e85d3e278f8097e427041d1 
	I0904 06:27:04.325378  897361 cni.go:84] Creating CNI manager for ""
	I0904 06:27:04.325384  897361 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 06:27:04.330302  897361 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 06:27:04.333211  897361 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 06:27:04.337445  897361 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 06:27:04.337455  897361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 06:27:04.357935  897361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 06:27:04.671308  897361 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 06:27:04.671390  897361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:27:04.671454  897361 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-668100 minikube.k8s.io/updated_at=2025_09_04T06_27_04_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff minikube.k8s.io/name=dockerenv-668100 minikube.k8s.io/primary=true
	I0904 06:27:04.914708  897361 ops.go:34] apiserver oom_adj: -16
	I0904 06:27:04.914741  897361 kubeadm.go:1105] duration metric: took 243.420241ms to wait for elevateKubeSystemPrivileges
	I0904 06:27:04.914753  897361 kubeadm.go:394] duration metric: took 19.132136419s to StartCluster
	I0904 06:27:04.914769  897361 settings.go:142] acquiring lock: {Name:mk9c58582abe05a5564762391c515fb51268bf5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:27:04.914829  897361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-875589/kubeconfig
	I0904 06:27:04.915468  897361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/kubeconfig: {Name:mk31755a028adb6a990e615720c4f523c928982d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:27:04.915696  897361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0904 06:27:04.915824  897361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 06:27:04.916068  897361 config.go:182] Loaded profile config "dockerenv-668100": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:27:04.916103  897361 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 06:27:04.916162  897361 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-668100"
	I0904 06:27:04.916174  897361 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-668100"
	I0904 06:27:04.916196  897361 host.go:66] Checking if "dockerenv-668100" exists ...
	I0904 06:27:04.916682  897361 cli_runner.go:164] Run: docker container inspect dockerenv-668100 --format={{.State.Status}}
	I0904 06:27:04.917001  897361 addons.go:69] Setting default-storageclass=true in profile "dockerenv-668100"
	I0904 06:27:04.917012  897361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-668100"
	I0904 06:27:04.917324  897361 cli_runner.go:164] Run: docker container inspect dockerenv-668100 --format={{.State.Status}}
	I0904 06:27:04.919140  897361 out.go:179] * Verifying Kubernetes components...
	I0904 06:27:04.925156  897361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:27:04.966336  897361 addons.go:238] Setting addon default-storageclass=true in "dockerenv-668100"
	I0904 06:27:04.966366  897361 host.go:66] Checking if "dockerenv-668100" exists ...
	I0904 06:27:04.966798  897361 cli_runner.go:164] Run: docker container inspect dockerenv-668100 --format={{.State.Status}}
	I0904 06:27:04.972988  897361 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:27:04.975810  897361 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:27:04.975822  897361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:27:04.975891  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:27:04.995066  897361 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:27:04.995078  897361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:27:04.995145  897361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-668100
	I0904 06:27:05.009425  897361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa Username:docker}
	I0904 06:27:05.031243  897361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33884 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/dockerenv-668100/id_rsa Username:docker}
	I0904 06:27:05.194402  897361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 06:27:05.194515  897361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:27:05.232777  897361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:27:05.273940  897361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:27:05.593356  897361 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
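
The sed pipeline at 06:27:05.194402 splices a hosts plugin block into the CoreDNS Corefile ahead of the forward directive (and enables query logging); the injected block, taken verbatim from the sed expression, is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
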
	I0904 06:27:05.596120  897361 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:27:05.596187  897361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:27:05.802353  897361 api_server.go:72] duration metric: took 886.627031ms to wait for apiserver process to appear ...
	I0904 06:27:05.802364  897361 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:27:05.802382  897361 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 06:27:05.814705  897361 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 06:27:05.816237  897361 api_server.go:141] control plane version: v1.34.0
	I0904 06:27:05.816252  897361 api_server.go:131] duration metric: took 13.883215ms to wait for apiserver health ...
	I0904 06:27:05.816260  897361 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:27:05.819666  897361 system_pods.go:59] 5 kube-system pods found
	I0904 06:27:05.819686  897361 system_pods.go:61] "etcd-dockerenv-668100" [b855a968-c0d4-4905-96a9-fc3dff0c4682] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:27:05.819693  897361 system_pods.go:61] "kube-apiserver-dockerenv-668100" [db48beda-0b2c-4fa5-976d-dea79c59d0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:27:05.819701  897361 system_pods.go:61] "kube-controller-manager-dockerenv-668100" [3d4f5233-1095-4ec3-a9ca-45c82bc77424] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:27:05.819707  897361 system_pods.go:61] "kube-scheduler-dockerenv-668100" [69b85745-80a0-4751-9a04-1e8986ba5b28] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:27:05.819712  897361 system_pods.go:61] "storage-provisioner" [71c4cc9e-d8f4-4c61-811d-34ef62cc03d3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0904 06:27:05.819717  897361 system_pods.go:74] duration metric: took 3.452541ms to wait for pod list to return data ...
	I0904 06:27:05.819727  897361 kubeadm.go:578] duration metric: took 904.007071ms to wait for: map[apiserver:true system_pods:true]
	I0904 06:27:05.819739  897361 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:27:05.820843  897361 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0904 06:27:05.823407  897361 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 06:27:05.823427  897361 node_conditions.go:123] node cpu capacity is 2
	I0904 06:27:05.823438  897361 node_conditions.go:105] duration metric: took 3.695539ms to run NodePressure ...
	I0904 06:27:05.823450  897361 start.go:241] waiting for startup goroutines ...
	I0904 06:27:05.823971  897361 addons.go:514] duration metric: took 907.857312ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0904 06:27:06.097823  897361 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-668100" context rescaled to 1 replicas
	I0904 06:27:06.097853  897361 start.go:246] waiting for cluster config update ...
	I0904 06:27:06.097863  897361 start.go:255] writing updated cluster config ...
	I0904 06:27:06.098173  897361 ssh_runner.go:195] Run: rm -f paused
	I0904 06:27:06.163017  897361 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:27:06.166248  897361 out.go:179] * Done! kubectl is now configured to use "dockerenv-668100" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94e77ae694a38       ba04bb24b9575       9 seconds ago       Running             storage-provisioner       0                   727511983fdb1       storage-provisioner
	380b173f1f0b6       6fc32d66c1411       10 seconds ago      Running             kube-proxy                0                   71068b21dcc6a       kube-proxy-cnl9w
	882fe50ff0640       b1a8c6f707935       10 seconds ago      Running             kindnet-cni               0                   5c635449e6a8b       kindnet-gktdj
	1cd1d9f20ef8a       996be7e86d9b3       24 seconds ago      Running             kube-controller-manager   0                   f3141d4ebe6f5       kube-controller-manager-dockerenv-668100
	6ae62be481cbe       a25f5ef9c34c3       24 seconds ago      Running             kube-scheduler            0                   32c2b1cb71e25       kube-scheduler-dockerenv-668100
	7d5b83c3116ee       a1894772a478e       24 seconds ago      Running             etcd                      0                   0a96949b99cab       etcd-dockerenv-668100
	1ee90978e53f4       d291939e99406       24 seconds ago      Running             kube-apiserver            0                   0618a2426e275       kube-apiserver-dockerenv-668100
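
This table is the node-side CRI view of the cluster; a hedged way to reproduce it against a live profile (profile name from this run) is to run crictl over minikube's SSH wrapper:

	# hedged sketch: list all CRI containers inside the node
	minikube ssh -p dockerenv-668100 "sudo crictl ps -a"
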
	
	
	==> containerd <==
	Sep 04 06:26:56 dockerenv-668100 containerd[833]: time="2025-09-04T06:26:56.677974657Z" level=info msg="StartContainer for \"1cd1d9f20ef8ade227088850e8a4b70283e0db55a19acab86c1d990dfd1c836e\" returns successfully"
	Sep 04 06:27:09 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:09.966772873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sjph7,Uid:f5428f91-007a-40ef-aea7-9ed4ae42021f,Namespace:kube-system,Attempt:0,}"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.011434256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sjph7,Uid:f5428f91-007a-40ef-aea7-9ed4ae42021f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\": failed to find network info for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.270098699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-gktdj,Uid:4858c187-b6a5-4f05-bbe2-c1e23c6af60c,Namespace:kube-system,Attempt:0,}"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.285660717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnl9w,Uid:2c703103-b24a-4f1e-83c0-e70b47e465ae,Namespace:kube-system,Attempt:0,}"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.367809457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-gktdj,Uid:4858c187-b6a5-4f05-bbe2-c1e23c6af60c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c635449e6a8b11b98dcde8e4814ca82e8f66b4382ea62f343a985e7ed33020d\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.378783360Z" level=info msg="CreateContainer within sandbox \"5c635449e6a8b11b98dcde8e4814ca82e8f66b4382ea62f343a985e7ed33020d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.383953195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnl9w,Uid:2c703103-b24a-4f1e-83c0-e70b47e465ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"71068b21dcc6a0baddb2537720334153b1cb6845f04513fd94a56827a0ff578b\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.391589390Z" level=info msg="CreateContainer within sandbox \"71068b21dcc6a0baddb2537720334153b1cb6845f04513fd94a56827a0ff578b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.407678144Z" level=info msg="CreateContainer within sandbox \"5c635449e6a8b11b98dcde8e4814ca82e8f66b4382ea62f343a985e7ed33020d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"882fe50ff0640fcc6be702ebd8fb9bb219cb6ed580178c5642230626d1843c44\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.408815154Z" level=info msg="StartContainer for \"882fe50ff0640fcc6be702ebd8fb9bb219cb6ed580178c5642230626d1843c44\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.416964127Z" level=info msg="CreateContainer within sandbox \"71068b21dcc6a0baddb2537720334153b1cb6845f04513fd94a56827a0ff578b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"380b173f1f0b66abb94a52bc25f3037e00c21670f7fb2a304fcc5d5751ffae9a\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.419772581Z" level=info msg="StartContainer for \"380b173f1f0b66abb94a52bc25f3037e00c21670f7fb2a304fcc5d5751ffae9a\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.501482479Z" level=info msg="StartContainer for \"380b173f1f0b66abb94a52bc25f3037e00c21670f7fb2a304fcc5d5751ffae9a\" returns successfully"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.559088191Z" level=info msg="StartContainer for \"882fe50ff0640fcc6be702ebd8fb9bb219cb6ed580178c5642230626d1843c44\" returns successfully"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.572953018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:71c4cc9e-d8f4-4c61-811d-34ef62cc03d3,Namespace:kube-system,Attempt:0,}"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.656558503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:71c4cc9e-d8f4-4c61-811d-34ef62cc03d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"727511983fdb14ea4fae2029944cf5d99c53d70bfe20ef210296e8f6d4fafe46\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.666421291Z" level=info msg="CreateContainer within sandbox \"727511983fdb14ea4fae2029944cf5d99c53d70bfe20ef210296e8f6d4fafe46\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.685648465Z" level=info msg="CreateContainer within sandbox \"727511983fdb14ea4fae2029944cf5d99c53d70bfe20ef210296e8f6d4fafe46\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"94e77ae694a386fd3c954fbe0ec6379751ed420a5f54c632274658230e01f8cf\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.686579326Z" level=info msg="StartContainer for \"94e77ae694a386fd3c954fbe0ec6379751ed420a5f54c632274658230e01f8cf\""
	Sep 04 06:27:10 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:10.777440897Z" level=info msg="StartContainer for \"94e77ae694a386fd3c954fbe0ec6379751ed420a5f54c632274658230e01f8cf\" returns successfully"
	Sep 04 06:27:14 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:14.036312198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 04 06:27:18 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:18.607431694Z" level=info msg="ImageCreate event name:\"docker.io/local/minikube-dockerenv-containerd-test:latest\""
	Sep 04 06:27:18 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:18.621722382Z" level=info msg="ImageCreate event name:\"sha256:55c49c8f4b2e09dc0d5b37eb589b5abf71303b9bbbc3859a5459b25fa9fd0b57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 04 06:27:18 dockerenv-668100 containerd[833]: time="2025-09-04T06:27:18.622130749Z" level=info msg="ImageUpdate event name:\"docker.io/local/minikube-dockerenv-containerd-test:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	Name:               dockerenv-668100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=dockerenv-668100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=dockerenv-668100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_27_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:27:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-668100
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 06:27:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 06:27:14 +0000   Thu, 04 Sep 2025 06:26:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 06:27:14 +0000   Thu, 04 Sep 2025 06:26:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 06:27:14 +0000   Thu, 04 Sep 2025 06:26:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 06:27:14 +0000   Thu, 04 Sep 2025 06:27:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-668100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e832d2bd74b8fba9e9819d9c1d589
	  System UUID:                0207422c-002e-4426-abc7-5742cab1bdba
	  Boot ID:                    73e95979-5845-4235-a957-c3d9397ed2ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sjph7                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11s
	  kube-system                 etcd-dockerenv-668100                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17s
	  kube-system                 kindnet-gktdj                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-apiserver-dockerenv-668100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-668100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-cnl9w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-dockerenv-668100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 9s    kube-proxy       
	  Normal   Starting                 17s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  17s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17s   kubelet          Node dockerenv-668100 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s   kubelet          Node dockerenv-668100 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s   kubelet          Node dockerenv-668100 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12s   node-controller  Node dockerenv-668100 event: Registered Node dockerenv-668100 in Controller
	
	
	==> dmesg <==
	[Sep 4 05:18] kauditd_printk_skb: 8 callbacks suppressed
	[Sep 4 05:30] 9pnet: p9_fd_create_tcp (626527): problem connecting socket to 192.168.49.1
	[Sep 4 06:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [7d5b83c3116ee734dc27be4059c60fbef025a6b40879d237176b7e5b5df2d798] <==
	{"level":"warn","ts":"2025-09-04T06:26:58.737449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.750091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.778496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.795249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.807605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.831641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.852328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.869327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.882697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.939299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.955012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.966306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:58.982251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.012322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.041827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.074982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.105990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.120982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.157311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.204925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.277626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.281013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.305103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.326005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:26:59.391794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50002","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:27:20 up  4:09,  0 users,  load average: 2.29, 3.01, 3.35
	Linux dockerenv-668100 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [882fe50ff0640fcc6be702ebd8fb9bb219cb6ed580178c5642230626d1843c44] <==
	I0904 06:27:10.740381       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0904 06:27:10.740636       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0904 06:27:10.740822       1 main.go:148] setting mtu 1500 for CNI 
	I0904 06:27:10.740836       1 main.go:178] kindnetd IP family: "ipv4"
	I0904 06:27:10.740847       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-04T06:27:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0904 06:27:10.941169       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0904 06:27:10.941252       1 controller.go:381] "Waiting for informer caches to sync"
	I0904 06:27:10.941283       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0904 06:27:10.942214       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1ee90978e53f4c922480a75ed1d74f67e1eefed0bf85dec4d6b473a4d2c0135b] <==
	I0904 06:27:00.662030       1 cache.go:39] Caches are synced for autoregister controller
	I0904 06:27:00.668903       1 controller.go:667] quota admission added evaluator for: namespaces
	E0904 06:27:00.676342       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0904 06:27:00.698423       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I0904 06:27:00.702226       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 06:27:00.725914       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 06:27:00.729397       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0904 06:27:00.882090       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 06:27:01.193804       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0904 06:27:01.203942       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0904 06:27:01.204140       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 06:27:02.299671       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 06:27:02.379772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 06:27:02.515913       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0904 06:27:02.525119       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0904 06:27:02.526603       1 controller.go:667] quota admission added evaluator for: endpoints
	I0904 06:27:02.532597       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 06:27:03.434265       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0904 06:27:03.730312       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0904 06:27:03.747991       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0904 06:27:03.761594       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0904 06:27:09.092456       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 06:27:09.097450       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 06:27:09.337794       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0904 06:27:09.445404       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1cd1d9f20ef8ade227088850e8a4b70283e0db55a19acab86c1d990dfd1c836e] <==
	I0904 06:27:08.434011       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 06:27:08.433743       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0904 06:27:08.433835       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 06:27:08.435209       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0904 06:27:08.435599       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 06:27:08.436522       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0904 06:27:08.439566       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0904 06:27:08.439750       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 06:27:08.448257       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0904 06:27:08.451909       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0904 06:27:08.461190       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0904 06:27:08.464763       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 06:27:08.467576       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 06:27:08.474547       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0904 06:27:08.480807       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 06:27:08.480990       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0904 06:27:08.482051       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0904 06:27:08.483778       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 06:27:08.484106       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 06:27:08.484784       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0904 06:27:08.485116       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 06:27:08.485231       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 06:27:08.485547       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0904 06:27:08.488470       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0904 06:27:08.490990       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [380b173f1f0b66abb94a52bc25f3037e00c21670f7fb2a304fcc5d5751ffae9a] <==
	I0904 06:27:10.534203       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:27:10.618919       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:27:10.725005       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:27:10.725051       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 06:27:10.725126       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:27:10.800234       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:27:10.800302       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:27:10.804169       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:27:10.804548       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:27:10.804574       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:27:10.806502       1 config.go:200] "Starting service config controller"
	I0904 06:27:10.806941       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:27:10.807215       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:27:10.807306       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:27:10.807404       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:27:10.807482       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:27:10.815400       1 config.go:309] "Starting node config controller"
	I0904 06:27:10.815475       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:27:10.819239       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:27:10.907917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:27:10.907937       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:27:10.908132       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6ae62be481cbea4d22cfc09d04c38605d9e3ea025928c5358b8797c6fe76c42c] <==
	I0904 06:27:01.282793       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:27:02.280597       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:27:02.281377       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:27:02.281534       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:27:02.281628       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:27:02.314069       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:27:02.314105       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:27:02.316135       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:27:02.316342       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:27:02.316561       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:27:02.316653       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0904 06:27:02.326118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0904 06:27:03.816510       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381023    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4858c187-b6a5-4f05-bbe2-c1e23c6af60c-xtables-lock\") pod \"kindnet-gktdj\" (UID: \"4858c187-b6a5-4f05-bbe2-c1e23c6af60c\") " pod="kube-system/kindnet-gktdj"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381286    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c703103-b24a-4f1e-83c0-e70b47e465ae-xtables-lock\") pod \"kube-proxy-cnl9w\" (UID: \"2c703103-b24a-4f1e-83c0-e70b47e465ae\") " pod="kube-system/kube-proxy-cnl9w"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381375    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4858c187-b6a5-4f05-bbe2-c1e23c6af60c-lib-modules\") pod \"kindnet-gktdj\" (UID: \"4858c187-b6a5-4f05-bbe2-c1e23c6af60c\") " pod="kube-system/kindnet-gktdj"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381452    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c703103-b24a-4f1e-83c0-e70b47e465ae-kube-proxy\") pod \"kube-proxy-cnl9w\" (UID: \"2c703103-b24a-4f1e-83c0-e70b47e465ae\") " pod="kube-system/kube-proxy-cnl9w"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381530    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jncd4\" (UniqueName: \"kubernetes.io/projected/2c703103-b24a-4f1e-83c0-e70b47e465ae-kube-api-access-jncd4\") pod \"kube-proxy-cnl9w\" (UID: \"2c703103-b24a-4f1e-83c0-e70b47e465ae\") " pod="kube-system/kube-proxy-cnl9w"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381636    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4858c187-b6a5-4f05-bbe2-c1e23c6af60c-cni-cfg\") pod \"kindnet-gktdj\" (UID: \"4858c187-b6a5-4f05-bbe2-c1e23c6af60c\") " pod="kube-system/kindnet-gktdj"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381714    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4tk\" (UniqueName: \"kubernetes.io/projected/4858c187-b6a5-4f05-bbe2-c1e23c6af60c-kube-api-access-tq4tk\") pod \"kindnet-gktdj\" (UID: \"4858c187-b6a5-4f05-bbe2-c1e23c6af60c\") " pod="kube-system/kindnet-gktdj"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.381802    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c703103-b24a-4f1e-83c0-e70b47e465ae-lib-modules\") pod \"kube-proxy-cnl9w\" (UID: \"2c703103-b24a-4f1e-83c0-e70b47e465ae\") " pod="kube-system/kube-proxy-cnl9w"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: E0904 06:27:09.495717    1540 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: E0904 06:27:09.495758    1540 projected.go:196] Error preparing data for projected volume kube-api-access-tq4tk for pod kube-system/kindnet-gktdj: configmap "kube-root-ca.crt" not found
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: E0904 06:27:09.495820    1540 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4858c187-b6a5-4f05-bbe2-c1e23c6af60c-kube-api-access-tq4tk podName:4858c187-b6a5-4f05-bbe2-c1e23c6af60c nodeName:}" failed. No retries permitted until 2025-09-04 06:27:09.995799878 +0000 UTC m=+6.499909084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tq4tk" (UniqueName: "kubernetes.io/projected/4858c187-b6a5-4f05-bbe2-c1e23c6af60c-kube-api-access-tq4tk") pod "kindnet-gktdj" (UID: "4858c187-b6a5-4f05-bbe2-c1e23c6af60c") : configmap "kube-root-ca.crt" not found
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: E0904 06:27:09.497948    1540 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: E0904 06:27:09.497990    1540 projected.go:196] Error preparing data for projected volume kube-api-access-jncd4 for pod kube-system/kube-proxy-cnl9w: configmap "kube-root-ca.crt" not found
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: E0904 06:27:09.498060    1540 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c703103-b24a-4f1e-83c0-e70b47e465ae-kube-api-access-jncd4 podName:2c703103-b24a-4f1e-83c0-e70b47e465ae nodeName:}" failed. No retries permitted until 2025-09-04 06:27:09.998038943 +0000 UTC m=+6.502148150 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jncd4" (UniqueName: "kubernetes.io/projected/2c703103-b24a-4f1e-83c0-e70b47e465ae-kube-api-access-jncd4") pod "kube-proxy-cnl9w" (UID: "2c703103-b24a-4f1e-83c0-e70b47e465ae") : configmap "kube-root-ca.crt" not found
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.786519    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5428f91-007a-40ef-aea7-9ed4ae42021f-config-volume\") pod \"coredns-66bc5c9577-sjph7\" (UID: \"f5428f91-007a-40ef-aea7-9ed4ae42021f\") " pod="kube-system/coredns-66bc5c9577-sjph7"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.787041    1540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6sb\" (UniqueName: \"kubernetes.io/projected/f5428f91-007a-40ef-aea7-9ed4ae42021f-kube-api-access-5n6sb\") pod \"coredns-66bc5c9577-sjph7\" (UID: \"f5428f91-007a-40ef-aea7-9ed4ae42021f\") " pod="kube-system/coredns-66bc5c9577-sjph7"
	Sep 04 06:27:09 dockerenv-668100 kubelet[1540]: I0904 06:27:09.897464    1540 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 04 06:27:10 dockerenv-668100 kubelet[1540]: E0904 06:27:10.011779    1540 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\": failed to find network info for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\""
	Sep 04 06:27:10 dockerenv-668100 kubelet[1540]: E0904 06:27:10.011885    1540 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\": failed to find network info for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\"" pod="kube-system/coredns-66bc5c9577-sjph7"
	Sep 04 06:27:10 dockerenv-668100 kubelet[1540]: E0904 06:27:10.011908    1540 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\": failed to find network info for sandbox \"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\"" pod="kube-system/coredns-66bc5c9577-sjph7"
	Sep 04 06:27:10 dockerenv-668100 kubelet[1540]: E0904 06:27:10.011968    1540 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sjph7_kube-system(f5428f91-007a-40ef-aea7-9ed4ae42021f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sjph7_kube-system(f5428f91-007a-40ef-aea7-9ed4ae42021f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\\\": failed to find network info for sandbox \\\"d3be208230571470bdd12d3803e95a1a23265b4f0ad5e94e586c896d4631f633\\\"\"" pod="kube-system/coredns-66bc5c9577-sjph7" podUID="f5428f91-007a-40ef-aea7-9ed4ae42021f"
	Sep 04 06:27:10 dockerenv-668100 kubelet[1540]: I0904 06:27:10.764132    1540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cnl9w" podStartSLOduration=1.7641143270000001 podStartE2EDuration="1.764114327s" podCreationTimestamp="2025-09-04 06:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 06:27:10.76399357 +0000 UTC m=+7.268102777" watchObservedRunningTime="2025-09-04 06:27:10.764114327 +0000 UTC m=+7.268223533"
	Sep 04 06:27:11 dockerenv-668100 kubelet[1540]: I0904 06:27:11.762252    1540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gktdj" podStartSLOduration=2.762232479 podStartE2EDuration="2.762232479s" podCreationTimestamp="2025-09-04 06:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 06:27:10.790653136 +0000 UTC m=+7.294762342" watchObservedRunningTime="2025-09-04 06:27:11.762232479 +0000 UTC m=+8.266341694"
	Sep 04 06:27:14 dockerenv-668100 kubelet[1540]: I0904 06:27:14.035175    1540 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 06:27:14 dockerenv-668100 kubelet[1540]: I0904 06:27:14.036560    1540 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	
	==> storage-provisioner [94e77ae694a386fd3c954fbe0ec6379751ed420a5f54c632274658230e01f8cf] <==
	I0904 06:27:10.787740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p dockerenv-668100 -n dockerenv-668100
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-668100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-sjph7
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-668100 describe pod coredns-66bc5c9577-sjph7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-668100 describe pod coredns-66bc5c9577-sjph7: exit status 1 (96.469023ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-sjph7" not found

** /stderr **
helpers_test.go:287: kubectl --context dockerenv-668100 describe pod coredns-66bc5c9577-sjph7: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-668100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-668100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-668100: (1.946916615s)
--- FAIL: TestDockerEnvContainerd (52.51s)


Test pass (301/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.97
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.25
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 5.15
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.09
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 210.15
29 TestAddons/serial/Volcano 40.34
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 10.86
35 TestAddons/parallel/Registry 17.8
36 TestAddons/parallel/RegistryCreds 0.86
37 TestAddons/parallel/Ingress 21.02
38 TestAddons/parallel/InspektorGadget 6.33
39 TestAddons/parallel/MetricsServer 6.19
41 TestAddons/parallel/CSI 51.04
42 TestAddons/parallel/Headlamp 17.88
43 TestAddons/parallel/CloudSpanner 6.64
44 TestAddons/parallel/LocalPath 53.71
45 TestAddons/parallel/NvidiaDevicePlugin 6.89
46 TestAddons/parallel/Yakd 11.91
48 TestAddons/StoppedEnableDisable 12.36
49 TestCertOptions 37.59
50 TestCertExpiration 232.79
52 TestForceSystemdFlag 42.98
53 TestForceSystemdEnv 42.8
59 TestErrorSpam/setup 33.14
60 TestErrorSpam/start 0.74
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 2.05
63 TestErrorSpam/unpause 1.96
64 TestErrorSpam/stop 12.23
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 92.42
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.59
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.39
76 TestFunctional/serial/CacheCmd/cache/add_local 1.28
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 44.3
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.81
87 TestFunctional/serial/LogsFileCmd 1.79
88 TestFunctional/serial/InvalidService 5.4
90 TestFunctional/parallel/ConfigCmd 0.65
91 TestFunctional/parallel/DashboardCmd 8.32
92 TestFunctional/parallel/DryRun 0.61
93 TestFunctional/parallel/InternationalLanguage 0.27
94 TestFunctional/parallel/StatusCmd 1.05
98 TestFunctional/parallel/ServiceCmdConnect 8.59
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 25.3
102 TestFunctional/parallel/SSHCmd 0.53
103 TestFunctional/parallel/CpCmd 2.14
105 TestFunctional/parallel/FileSync 0.41
106 TestFunctional/parallel/CertSync 2.27
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.1
116 TestFunctional/parallel/Version/components 1.59
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.67
122 TestFunctional/parallel/ImageCommands/Setup 0.72
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.5
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.43
128 TestFunctional/parallel/ServiceCmd/DeployApp 8.28
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.32
139 TestFunctional/parallel/ServiceCmd/List 0.33
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
142 TestFunctional/parallel/ServiceCmd/Format 0.39
143 TestFunctional/parallel/ServiceCmd/URL 0.42
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
151 TestFunctional/parallel/ProfileCmd/profile_list 0.42
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
153 TestFunctional/parallel/MountCmd/any-port 8.2
154 TestFunctional/parallel/MountCmd/specific-port 1.69
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.98
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 120.03
164 TestMultiControlPlane/serial/DeployApp 22.97
165 TestMultiControlPlane/serial/PingHostFromPods 1.58
166 TestMultiControlPlane/serial/AddWorkerNode 16.91
167 TestMultiControlPlane/serial/NodeLabels 0.18
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.32
169 TestMultiControlPlane/serial/CopyFile 19.98
170 TestMultiControlPlane/serial/StopSecondaryNode 12.86
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
172 TestMultiControlPlane/serial/RestartSecondaryNode 13.47
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.92
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.74
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
177 TestMultiControlPlane/serial/StopCluster 36.02
178 TestMultiControlPlane/serial/RestartCluster 68.29
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
180 TestMultiControlPlane/serial/AddSecondaryNode 39.93
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.18
185 TestJSONOutput/start/Command 93.7
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.77
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.68
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 1.28
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 36.8
211 TestKicCustomNetwork/use_default_bridge_network 34.44
212 TestKicExistingNetwork 32.84
213 TestKicCustomSubnet 36.18
214 TestKicStaticIP 32.73
215 TestMainNoArgs 0.07
216 TestMinikubeProfile 73.47
219 TestMountStart/serial/StartWithMountFirst 8.86
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 9.04
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.21
226 TestMountStart/serial/RestartStopped 7.53
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 105.73
231 TestMultiNode/serial/DeployApp2Nodes 20.31
232 TestMultiNode/serial/PingHostFrom2Pods 1.03
233 TestMultiNode/serial/AddNode 13.26
234 TestMultiNode/serial/MultiNodeLabels 0.13
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 10.14
237 TestMultiNode/serial/StopNode 2.26
238 TestMultiNode/serial/StartAfterStop 7.84
239 TestMultiNode/serial/RestartKeepsNodes 79.4
240 TestMultiNode/serial/DeleteNode 5.6
241 TestMultiNode/serial/StopMultiNode 24.02
242 TestMultiNode/serial/RestartMultiNode 50.44
243 TestMultiNode/serial/ValidateNameConflict 35.54
248 TestPreload 141.89
250 TestScheduledStopUnix 110.02
253 TestInsufficientStorage 9.79
254 TestRunningBinaryUpgrade 63.77
256 TestKubernetesUpgrade 350.01
257 TestMissingContainerUpgrade 141.77
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 40.99
261 TestNoKubernetes/serial/StartWithStopK8s 18.35
262 TestNoKubernetes/serial/Start 7.29
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
264 TestNoKubernetes/serial/ProfileList 0.67
265 TestNoKubernetes/serial/Stop 1.21
266 TestNoKubernetes/serial/StartNoArgs 6.77
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
268 TestStoppedBinaryUpgrade/Setup 0.64
269 TestStoppedBinaryUpgrade/Upgrade 57.02
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.5
279 TestPause/serial/Start 53.99
280 TestPause/serial/SecondStartNoReconfiguration 7.27
281 TestPause/serial/Pause 0.73
282 TestPause/serial/VerifyStatus 0.33
283 TestPause/serial/Unpause 0.74
284 TestPause/serial/PauseAgain 1.12
285 TestPause/serial/DeletePaused 2.87
286 TestPause/serial/VerifyDeletedResources 14.87
294 TestNetworkPlugins/group/false 6
299 TestStartStop/group/old-k8s-version/serial/FirstStart 71.14
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
302 TestStartStop/group/old-k8s-version/serial/Stop 12.11
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/old-k8s-version/serial/SecondStart 49.42
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
308 TestStartStop/group/old-k8s-version/serial/Pause 3.21
310 TestStartStop/group/no-preload/serial/FirstStart 88.09
312 TestStartStop/group/embed-certs/serial/FirstStart 67.59
313 TestStartStop/group/embed-certs/serial/DeployApp 9.37
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
315 TestStartStop/group/embed-certs/serial/Stop 12.19
316 TestStartStop/group/no-preload/serial/DeployApp 10.41
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
318 TestStartStop/group/no-preload/serial/Stop 12.41
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
320 TestStartStop/group/embed-certs/serial/SecondStart 51.51
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
322 TestStartStop/group/no-preload/serial/SecondStart 56.51
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
326 TestStartStop/group/embed-certs/serial/Pause 3.35
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.43
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.17
331 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
332 TestStartStop/group/no-preload/serial/Pause 4.06
334 TestStartStop/group/newest-cni/serial/FirstStart 43.87
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 15.96
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
343 TestStartStop/group/newest-cni/serial/Pause 3.24
344 TestNetworkPlugins/group/auto/Start 51.09
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.61
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.54
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.32
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.95
350 TestNetworkPlugins/group/auto/KubeletFlags 0.48
351 TestNetworkPlugins/group/auto/NetCatPod 11.48
352 TestNetworkPlugins/group/auto/DNS 0.28
353 TestNetworkPlugins/group/auto/Localhost 0.19
354 TestNetworkPlugins/group/auto/HairPin 0.23
355 TestNetworkPlugins/group/kindnet/Start 99.79
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
357 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
358 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.86
360 TestNetworkPlugins/group/calico/Start 57.46
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.3
363 TestNetworkPlugins/group/calico/NetCatPod 9.27
364 TestNetworkPlugins/group/calico/DNS 0.21
365 TestNetworkPlugins/group/calico/Localhost 0.16
366 TestNetworkPlugins/group/calico/HairPin 0.17
367 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
368 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
369 TestNetworkPlugins/group/kindnet/NetCatPod 11.45
370 TestNetworkPlugins/group/kindnet/DNS 0.28
371 TestNetworkPlugins/group/kindnet/Localhost 0.23
372 TestNetworkPlugins/group/kindnet/HairPin 0.23
373 TestNetworkPlugins/group/custom-flannel/Start 63.76
374 TestNetworkPlugins/group/enable-default-cni/Start 50.81
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
379 TestNetworkPlugins/group/custom-flannel/DNS 0.2
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.34
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
385 TestNetworkPlugins/group/flannel/Start 65.49
386 TestNetworkPlugins/group/bridge/Start 52.48
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
388 TestNetworkPlugins/group/bridge/NetCatPod 10.29
389 TestNetworkPlugins/group/flannel/ControllerPod 6
390 TestNetworkPlugins/group/bridge/DNS 0.21
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
392 TestNetworkPlugins/group/bridge/Localhost 0.25
393 TestNetworkPlugins/group/bridge/HairPin 0.22
394 TestNetworkPlugins/group/flannel/NetCatPod 10.43
395 TestNetworkPlugins/group/flannel/DNS 0.23
396 TestNetworkPlugins/group/flannel/Localhost 0.2
397 TestNetworkPlugins/group/flannel/HairPin 0.2
TestDownloadOnly/v1.28.0/json-events (6.97s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-999612 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-999612 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.971195503s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.97s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0904 06:19:49.883852  877447 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0904 06:19:49.883934  877447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-999612
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-999612: exit status 85 (95.156706ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-999612 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-999612 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:19:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:19:42.962522  877452 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:19:42.962699  877452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:19:42.962732  877452 out.go:374] Setting ErrFile to fd 2...
	I0904 06:19:42.962752  877452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:19:42.963123  877452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	W0904 06:19:42.963298  877452 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-875589/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-875589/.minikube/config/config.json: no such file or directory
	I0904 06:19:42.963784  877452 out.go:368] Setting JSON to true
	I0904 06:19:42.964745  877452 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14532,"bootTime":1756952251,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0904 06:19:42.964845  877452 start.go:140] virtualization:  
	I0904 06:19:42.968990  877452 out.go:99] [download-only-999612] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0904 06:19:42.969205  877452 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 06:19:42.969310  877452 notify.go:220] Checking for updates...
	I0904 06:19:42.973003  877452 out.go:171] MINIKUBE_LOCATION=21409
	I0904 06:19:42.976047  877452 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:19:42.978896  877452 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	I0904 06:19:42.981663  877452 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	I0904 06:19:42.984553  877452 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0904 06:19:42.990140  877452 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 06:19:42.990422  877452 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:19:43.025243  877452 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:19:43.025367  877452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:19:43.094438  877452 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-04 06:19:43.085304836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:19:43.094548  877452 docker.go:318] overlay module found
	I0904 06:19:43.097552  877452 out.go:99] Using the docker driver based on user configuration
	I0904 06:19:43.097595  877452 start.go:304] selected driver: docker
	I0904 06:19:43.097608  877452 start.go:918] validating driver "docker" against <nil>
	I0904 06:19:43.097721  877452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:19:43.159033  877452 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-04 06:19:43.150219586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:19:43.159189  877452 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 06:19:43.159472  877452 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0904 06:19:43.159626  877452 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 06:19:43.162688  877452 out.go:171] Using Docker driver with root privileges
	I0904 06:19:43.165544  877452 cni.go:84] Creating CNI manager for ""
	I0904 06:19:43.165617  877452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 06:19:43.165632  877452 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 06:19:43.165717  877452 start.go:348] cluster config:
	{Name:download-only-999612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-999612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:19:43.168715  877452 out.go:99] Starting "download-only-999612" primary control-plane node in "download-only-999612" cluster
	I0904 06:19:43.168735  877452 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0904 06:19:43.171539  877452 out.go:99] Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:19:43.171579  877452 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0904 06:19:43.171735  877452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:19:43.188504  877452 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc to local cache
	I0904 06:19:43.188700  877452 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory
	I0904 06:19:43.188806  877452 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc to local cache
	I0904 06:19:43.230464  877452 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0904 06:19:43.230497  877452 cache.go:58] Caching tarball of preloaded images
	I0904 06:19:43.230655  877452 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0904 06:19:43.234103  877452 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0904 06:19:43.234124  877452 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	I0904 06:19:43.323253  877452 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0904 06:19:47.303949  877452 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	I0904 06:19:47.304072  877452 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-999612 host does not exist
	  To start a cluster, run: "minikube start -p download-only-999612"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
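
To re-check the preload this test downloaded, the md5 in the ?checksum= parameter of the download URL above can be verified by hand. A minimal sketch in plain shell, assuming the cache path shown in the log (this is not a minikube subcommand):

    md5sum /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
    # expected, per the download URL logged earlier: 38d7f581f2fa4226c8af2c9106b982b7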

TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-999612
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (5.15s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-489529 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-489529 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.151872069s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.15s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0904 06:19:55.522690  877447 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0904 06:19:55.522727  877447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-489529
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-489529: exit status 85 (91.402167ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-999612 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-999612 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │ 04 Sep 25 06:19 UTC │
	│ delete  │ -p download-only-999612                                                                                                                                                               │ download-only-999612 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │ 04 Sep 25 06:19 UTC │
	│ start   │ -o=json --download-only -p download-only-489529 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-489529 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:19:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:19:50.417959  877653 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:19:50.418091  877653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:19:50.418108  877653 out.go:374] Setting ErrFile to fd 2...
	I0904 06:19:50.418114  877653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:19:50.418375  877653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:19:50.418782  877653 out.go:368] Setting JSON to true
	I0904 06:19:50.419652  877653 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14540,"bootTime":1756952251,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0904 06:19:50.419720  877653 start.go:140] virtualization:  
	I0904 06:19:50.423271  877653 out.go:99] [download-only-489529] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0904 06:19:50.423542  877653 notify.go:220] Checking for updates...
	I0904 06:19:50.427417  877653 out.go:171] MINIKUBE_LOCATION=21409
	I0904 06:19:50.430436  877653 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:19:50.433346  877653 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	I0904 06:19:50.436227  877653 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	I0904 06:19:50.439138  877653 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0904 06:19:50.444548  877653 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 06:19:50.444788  877653 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:19:50.474675  877653 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:19:50.474795  877653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:19:50.531261  877653 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 06:19:50.521639429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:19:50.531376  877653 docker.go:318] overlay module found
	I0904 06:19:50.534318  877653 out.go:99] Using the docker driver based on user configuration
	I0904 06:19:50.534353  877653 start.go:304] selected driver: docker
	I0904 06:19:50.534367  877653 start.go:918] validating driver "docker" against <nil>
	I0904 06:19:50.534482  877653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:19:50.586613  877653 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 06:19:50.577090336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:19:50.586767  877653 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 06:19:50.587040  877653 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0904 06:19:50.587204  877653 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 06:19:50.590346  877653 out.go:171] Using Docker driver with root privileges
	I0904 06:19:50.593204  877653 cni.go:84] Creating CNI manager for ""
	I0904 06:19:50.593292  877653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 06:19:50.593308  877653 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 06:19:50.593389  877653 start.go:348] cluster config:
	{Name:download-only-489529 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-489529 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:19:50.596398  877653 out.go:99] Starting "download-only-489529" primary control-plane node in "download-only-489529" cluster
	I0904 06:19:50.596419  877653 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0904 06:19:50.599283  877653 out.go:99] Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:19:50.599315  877653 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 06:19:50.599524  877653 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:19:50.615358  877653 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc to local cache
	I0904 06:19:50.615488  877653 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory
	I0904 06:19:50.615514  877653 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory, skipping pull
	I0904 06:19:50.615520  877653 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in cache, skipping pull
	I0904 06:19:50.615530  877653 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc as a tarball
	I0904 06:19:50.653064  877653 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0904 06:19:50.653090  877653 cache.go:58] Caching tarball of preloaded images
	I0904 06:19:50.653257  877653 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 06:19:50.656306  877653 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0904 06:19:50.656334  877653 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 ...
	I0904 06:19:50.745444  877653 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:08b8266a02e141b302c5f305615e1018 -> /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
	I0904 06:19:53.982851  877653 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 ...
	I0904 06:19:53.982962  877653 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21409-875589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 ...
	I0904 06:19:54.916402  877653 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0904 06:19:54.916806  877653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/download-only-489529/config.json ...
	I0904 06:19:54.916841  877653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/download-only-489529/config.json: {Name:mk66df13db54394db020dd423c4b8c1754bad036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:19:54.917060  877653 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 06:19:54.917227  877653 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21409-875589/.minikube/cache/linux/arm64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-489529 host does not exist
	  To start a cluster, run: "minikube start -p download-only-489529"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-489529
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0904 06:19:56.818464  877447 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-356916 --alsologtostderr --binary-mirror http://127.0.0.1:38895 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-356916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-356916
--- PASS: TestBinaryMirror (0.61s)
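
TestBinaryMirror exercises --binary-mirror, which substitutes a local HTTP server for dl.k8s.io. A rough sketch of standing up such a mirror, assuming it must reproduce the /release/&lt;version&gt;/bin/linux/arm64 path layout seen in the kubectl URL above; the mirror directory and the reuse of the cached kubectl are illustrative assumptions, not part of the test:

    mkdir -p mirror/release/v1.34.0/bin/linux/arm64
    cp /home/jenkins/minikube-integration/21409-875589/.minikube/cache/linux/arm64/v1.34.0/kubectl mirror/release/v1.34.0/bin/linux/arm64/
    (cd mirror && python3 -m http.server 38895) &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-356916 --alsologtostderr --binary-mirror http://127.0.0.1:38895 --driver=docker --container-runtime=containerd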

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-903438
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-903438: exit status 85 (86.554309ms)
-- stdout --
	* Profile "addons-903438" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903438"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-903438
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-903438: exit status 85 (75.617214ms)
-- stdout --
	* Profile "addons-903438" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903438"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (210.15s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-903438 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-903438 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m30.151915259s)
--- PASS: TestAddons/Setup (210.15s)

TestAddons/serial/Volcano (40.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 67.25052ms
addons_test.go:868: volcano-scheduler stabilized in 67.507402ms
addons_test.go:876: volcano-admission stabilized in 67.578886ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-hhscv" [cc9e0111-e67a-4578-ad19-935ddf6cbf5e] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003153374s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-zww9f" [a73c87fd-eaac-4d31-9fd9-c00fb5cddfc2] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00347779s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-89jgt" [82c8aa42-e42e-4c84-bddc-d1f60c6fe379] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005055241s
addons_test.go:903: (dbg) Run:  kubectl --context addons-903438 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-903438 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-903438 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [5f7ef079-b213-4e4e-9a35-42af82ad5955] Pending
helpers_test.go:352: "test-job-nginx-0" [5f7ef079-b213-4e4e-9a35-42af82ad5955] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [5f7ef079-b213-4e4e-9a35-42af82ad5955] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003526104s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable volcano --alsologtostderr -v=1: (11.690724606s)
--- PASS: TestAddons/serial/Volcano (40.34s)
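
The "healthy within" lines above come from a label-selector poll in the test helpers. Roughly the same wait can be reproduced with plain kubectl, using the labels, namespaces, and timeouts from the log; a sketch, not the helper's actual implementation:

    kubectl --context addons-903438 -n volcano-system wait pod -l app=volcano-scheduler --for=condition=Ready --timeout=6m
    kubectl --context addons-903438 -n my-volcano wait pod -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=3m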

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-903438 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-903438 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (10.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-903438 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-903438 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fa6cd449-c9fe-4f38-96fb-c557eac946a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fa6cd449-c9fe-4f38-96fb-c557eac946a3] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004696463s
addons_test.go:694: (dbg) Run:  kubectl --context addons-903438 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-903438 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-903438 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-903438 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.86s)

TestAddons/parallel/Registry (17.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.276816ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-8m9dr" [ad8c1363-4cf2-450d-88a6-d0fbc63ea467] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003435031s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rn9vz" [52c5f936-4f70-461a-bdde-a51beb715702] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0077139s
addons_test.go:392: (dbg) Run:  kubectl --context addons-903438 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-903438 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-903438 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.661322412s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 ip
2025/09/04 06:24:45 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.80s)
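
The wget probe above doubles as a generic in-cluster reachability check: any ClusterIP service can be tested the same way from a throwaway pod. A sketch reusing the image and service name from the log (the pod name "probe" is arbitrary):

    kubectl --context addons-903438 run --rm -it probe --restart=Never --image=gcr.io/k8s-minikube/busybox -- wget --spider -S http://registry.kube-system.svc.cluster.local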

TestAddons/parallel/RegistryCreds (0.86s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.928717ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-903438
addons_test.go:332: (dbg) Run:  kubectl --context addons-903438 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.86s)

TestAddons/parallel/Ingress (21.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-903438 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-903438 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-903438 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [6b07cae7-b213-4d36-b90a-4c5829e2dca3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [6b07cae7-b213-4d36-b90a-4c5829e2dca3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003945552s
I0904 06:26:03.057538  877447 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-903438 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable ingress-dns --alsologtostderr -v=1: (1.175058464s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable ingress --alsologtostderr -v=1: (7.941525841s)
--- PASS: TestAddons/parallel/Ingress (21.02s)

TestAddons/parallel/InspektorGadget (6.33s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-ggk95" [23063cec-9a41-44f4-873d-16fc3b3e95b3] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003821859s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.33s)

TestAddons/parallel/MetricsServer (6.19s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.540398ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-w5bwz" [c963c9d7-b152-4a7c-9725-4b98568503d8] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005851246s
addons_test.go:463: (dbg) Run:  kubectl --context addons-903438 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable metrics-server --alsologtostderr -v=1: (1.066162784s)
--- PASS: TestAddons/parallel/MetricsServer (6.19s)

TestAddons/parallel/CSI (51.04s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0904 06:25:11.651104  877447 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0904 06:25:11.654537  877447 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0904 06:25:11.654564  877447 kapi.go:107] duration metric: took 6.391442ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.40101ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-903438 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-903438 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [075d4969-c3ff-4580-a2eb-88c0b1ecb8c2] Pending
helpers_test.go:352: "task-pv-pod" [075d4969-c3ff-4580-a2eb-88c0b1ecb8c2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [075d4969-c3ff-4580-a2eb-88c0b1ecb8c2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.007026956s
addons_test.go:572: (dbg) Run:  kubectl --context addons-903438 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-903438 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-903438 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-903438 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-903438 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-903438 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-903438 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a69177ce-1278-40e7-831f-d01bc47cf6fd] Pending
helpers_test.go:352: "task-pv-pod-restore" [a69177ce-1278-40e7-831f-d01bc47cf6fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a69177ce-1278-40e7-831f-d01bc47cf6fd] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008402674s
addons_test.go:614: (dbg) Run:  kubectl --context addons-903438 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-903438 delete pod task-pv-pod-restore: (1.390099983s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-903438 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-903438 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable volumesnapshots --alsologtostderr -v=1: (1.036427707s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.942732809s)
--- PASS: TestAddons/parallel/CSI (51.04s)

TestAddons/parallel/Headlamp (17.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-903438 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-903438 --alsologtostderr -v=1: (1.071191588s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-4vtx9" [47f9797c-f451-4207-adc7-d874f64af540] Pending
helpers_test.go:352: "headlamp-6f46646d79-4vtx9" [47f9797c-f451-4207-adc7-d874f64af540] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-4vtx9" [47f9797c-f451-4207-adc7-d874f64af540] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-4vtx9" [47f9797c-f451-4207-adc7-d874f64af540] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003865333s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable headlamp --alsologtostderr -v=1: (5.804602821s)
--- PASS: TestAddons/parallel/Headlamp (17.88s)

TestAddons/parallel/CloudSpanner (6.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-g46bb" [ab0b1d0d-d9e1-405f-84fe-d55f538d6883] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003199849s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.64s)

TestAddons/parallel/LocalPath (53.71s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-903438 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-903438 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-903438 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [83a2594b-6a45-4cb5-9ee6-fdbbce851b2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [83a2594b-6a45-4cb5-9ee6-fdbbce851b2b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [83a2594b-6a45-4cb5-9ee6-fdbbce851b2b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004102149s
addons_test.go:967: (dbg) Run:  kubectl --context addons-903438 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 ssh "cat /opt/local-path-provisioner/pvc-6aa63eb0-ba24-46af-ab92-52e9a2ec4d21_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-903438 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-903438 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.157822709s)
--- PASS: TestAddons/parallel/LocalPath (53.71s)

TestAddons/parallel/NvidiaDevicePlugin (6.89s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hjrzj" [f17cd18a-9c03-4fb4-b4b1-279613d40669] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007566954s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.89s)

TestAddons/parallel/Yakd (11.91s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6jt7p" [c0e74823-51ac-4def-ab03-cc63d3267206] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003597815s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-903438 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-903438 addons disable yakd --alsologtostderr -v=1: (5.907777353s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-903438
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-903438: (12.062269479s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-903438
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-903438
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-903438
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (37.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-398836 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-398836 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.9302342s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-398836 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-398836 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-398836 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-398836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-398836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-398836: (1.95514668s)
--- PASS: TestCertOptions (37.59s)

TestCertExpiration (232.79s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-597719 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-597719 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.453472589s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-597719 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-597719 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.836028905s)
helpers_test.go:175: Cleaning up "cert-expiration-597719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-597719
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-597719: (2.500578681s)
--- PASS: TestCertExpiration (232.79s)

TestForceSystemdFlag (42.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-660098 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-660098 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.200044791s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-660098 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-660098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-660098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-660098: (2.440316539s)
--- PASS: TestForceSystemdFlag (42.98s)

TestForceSystemdEnv (42.8s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-898272 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-898272 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.246370138s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-898272 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-898272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-898272
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-898272: (2.175527122s)
--- PASS: TestForceSystemdEnv (42.80s)

TestErrorSpam/setup (33.14s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-945588 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-945588 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-945588 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-945588 --driver=docker  --container-runtime=containerd: (33.137838871s)
--- PASS: TestErrorSpam/setup (33.14s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (2.05s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 pause
--- PASS: TestErrorSpam/pause (2.05s)

TestErrorSpam/unpause (1.96s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 unpause
--- PASS: TestErrorSpam/unpause (1.96s)

TestErrorSpam/stop (12.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 stop: (12.021343862s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-945588 --log_dir /tmp/nospam-945588 stop
--- PASS: TestErrorSpam/stop (12.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-875589/.minikube/files/etc/test/nested/copy/877447/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (92.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037768 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0904 06:28:27.708715  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:27.715781  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:27.727125  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:27.748565  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:27.789944  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:27.871355  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:28.032866  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:28.354829  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:28.997216  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:30.278995  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:32.841163  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:37.962495  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:48.204271  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:29:08.686288  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:29:49.647933  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-037768 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m32.422751271s)
--- PASS: TestFunctional/serial/StartWithProxy (92.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.59s)

=== RUN   TestFunctional/serial/SoftStart
I0904 06:29:51.571596  877447 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037768 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-037768 --alsologtostderr -v=8: (6.58742539s)
functional_test.go:678: soft start took 6.588702775s for "functional-037768" cluster.
I0904 06:29:58.159331  877447 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.59s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-037768 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 cache add registry.k8s.io/pause:3.1: (1.380278882s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 cache add registry.k8s.io/pause:3.3: (1.844474001s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 cache add registry.k8s.io/pause:latest: (1.168787134s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-037768 /tmp/TestFunctionalserialCacheCmdcacheadd_local3692520629/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cache add minikube-local-cache-test:functional-037768
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cache delete minikube-local-cache-test:functional-037768
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-037768
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (316.401443ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 cache reload: (1.049988582s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 kubectl -- --context functional-037768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-037768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (44.3s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-037768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.304273636s)
functional_test.go:776: restart took 44.304401186s for "functional-037768" cluster.
I0904 06:30:51.129423  877447 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (44.30s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-037768 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 logs: (1.806700302s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)

TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 logs --file /tmp/TestFunctionalserialLogsFileCmd2462397654/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 logs --file /tmp/TestFunctionalserialLogsFileCmd2462397654/001/logs.txt: (1.791791907s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (5.4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-037768 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-037768
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-037768: exit status 115 (724.096304ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31499 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-037768 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-037768 delete -f testdata/invalidsvc.yaml: (1.376805886s)
--- PASS: TestFunctional/serial/InvalidService (5.40s)

TestFunctional/parallel/ConfigCmd (0.65s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 config get cpus: exit status 14 (110.910399ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 config get cpus: exit status 14 (91.300317ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.65s)

TestFunctional/parallel/DashboardCmd (8.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-037768 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-037768 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 918238: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.32s)

TestFunctional/parallel/DryRun (0.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-037768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (287.163547ms)

-- stdout --
	* [functional-037768] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0904 06:31:40.684448  917731 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:31:40.684606  917731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:31:40.684617  917731 out.go:374] Setting ErrFile to fd 2...
	I0904 06:31:40.684623  917731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:31:40.684913  917731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:31:40.685354  917731 out.go:368] Setting JSON to false
	I0904 06:31:40.686383  917731 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15250,"bootTime":1756952251,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0904 06:31:40.686458  917731 start.go:140] virtualization:  
	I0904 06:31:40.690051  917731 out.go:179] * [functional-037768] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0904 06:31:40.693277  917731 notify.go:220] Checking for updates...
	I0904 06:31:40.696776  917731 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:31:40.699903  917731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:31:40.702735  917731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	I0904 06:31:40.705842  917731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	I0904 06:31:40.708927  917731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 06:31:40.713223  917731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:31:40.716784  917731 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:31:40.717417  917731 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:31:40.758349  917731 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:31:40.758475  917731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:31:40.840032  917731 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 06:31:40.829357264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:31:40.840148  917731 docker.go:318] overlay module found
	I0904 06:31:40.843298  917731 out.go:179] * Using the docker driver based on existing profile
	I0904 06:31:40.846200  917731 start.go:304] selected driver: docker
	I0904 06:31:40.846220  917731 start.go:918] validating driver "docker" against &{Name:functional-037768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-037768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:31:40.846338  917731 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:31:40.849817  917731 out.go:203] 
	W0904 06:31:40.852676  917731 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 06:31:40.855689  917731 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037768 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)

TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-037768 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (274.075794ms)

-- stdout --
	* [functional-037768] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0904 06:31:41.268066  917944 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:31:41.268284  917944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:31:41.268346  917944 out.go:374] Setting ErrFile to fd 2...
	I0904 06:31:41.268367  917944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:31:41.269790  917944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:31:41.270234  917944 out.go:368] Setting JSON to false
	I0904 06:31:41.271242  917944 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15251,"bootTime":1756952251,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0904 06:31:41.271344  917944 start.go:140] virtualization:  
	I0904 06:31:41.278888  917944 out.go:179] * [functional-037768] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0904 06:31:41.286671  917944 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:31:41.286859  917944 notify.go:220] Checking for updates...
	I0904 06:31:41.295465  917944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:31:41.298388  917944 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	I0904 06:31:41.301473  917944 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	I0904 06:31:41.304460  917944 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 06:31:41.307541  917944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:31:41.310929  917944 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:31:41.311606  917944 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:31:41.358598  917944 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:31:41.358728  917944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:31:41.435183  917944 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 06:31:41.423692543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:31:41.435292  917944 docker.go:318] overlay module found
	I0904 06:31:41.438491  917944 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0904 06:31:41.441401  917944 start.go:304] selected driver: docker
	I0904 06:31:41.441423  917944 start.go:918] validating driver "docker" against &{Name:functional-037768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-037768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:31:41.441523  917944 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:31:41.446219  917944 out.go:203] 
	W0904 06:31:41.449251  917944 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 06:31:41.454621  917944 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)
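Note: the French stderr above is the point of this test: it re-runs the low-memory dry-run under a French locale and checks that the RSRC_INSUFFICIENT_REQ_MEMORY error (the same "requested memory allocation 250MiB is less than the usable minimum of 1800MB" message seen in DryRun above) comes out localized. A minimal sketch of reproducing it by hand; LC_ALL=fr is an assumption about how the translation gets selected, not taken from the test:

# Hypothetical reproduction; LC_ALL=fr is assumed, not confirmed by this log.
LC_ALL=fr out/minikube-linux-arm64 start -p functional-037768 --dry-run \
  --memory 250MB --driver=docker --container-runtime=containerd
# Expected: exit status 23 and the localized "X Fermeture en raison de
# RSRC_INSUFFICIENT_REQ_MEMORY ..." message shown above.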

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
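Note: status -f takes a Go template over minikube's status struct (the fields used above: Host, Kubelet, APIServer, Kubeconfig), and status -o json emits the same fields as JSON. A minimal sketch of querying one field both ways; the jq post-processing is illustrative and not part of the test:

# Go-template form, as exercised above:
out/minikube-linux-arm64 -p functional-037768 status -f '{{.Host}}'
# JSON form, filtered with jq (assumes jq is installed on the host):
out/minikube-linux-arm64 -p functional-037768 status -o json | jq -r '.Host'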

TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-037768 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-037768 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2vbwn" [6478d45a-e639-4bcd-a261-c6f8092c41c5] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-2vbwn" [6478d45a-e639-4bcd-a261-c6f8092c41c5] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.002926606s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30339
functional_test.go:1680: http://192.168.49.2:30339: success! body:
Request served by hello-node-connect-7d85dfc575-2vbwn

HTTP/1.1 GET /

Host: 192.168.49.2:30339
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (25.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4b55e8f8-93c9-4619-9f86-8a949716fd33] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004933783s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-037768 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-037768 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-037768 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-037768 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e6df8f6d-13dc-4bc6-9d5c-dbf9a6d1c669] Pending
helpers_test.go:352: "sp-pod" [e6df8f6d-13dc-4bc6-9d5c-dbf9a6d1c669] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e6df8f6d-13dc-4bc6-9d5c-dbf9a6d1c669] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003918408s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-037768 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-037768 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-037768 delete -f testdata/storage-provisioner/pod.yaml: (1.148444125s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-037768 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [358ac41c-adbf-4909-98cf-186e160d0fa2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [358ac41c-adbf-4909-98cf-186e160d0fa2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003564634s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-037768 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.30s)
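Note: the pass condition here is that /tmp/mount/foo, created before the first sp-pod was deleted, is still visible from the re-created pod, i.e. the data lives on the PVC rather than in the pod filesystem. A condensed sketch of the same check, using the manifests and names from the log above:

# Write through the PVC-backed mount, recycle the pod, then read the file back.
kubectl --context functional-037768 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-037768 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-037768 apply -f testdata/storage-provisioner/pod.yaml
# Once the new sp-pod is Running, foo should still be listed:
kubectl --context functional-037768 exec sp-pod -- ls /tmp/mount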

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (2.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh -n functional-037768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cp functional-037768:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2028156123/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh -n functional-037768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh -n functional-037768 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.14s)

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/877447/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo cat /etc/test/nested/copy/877447/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/877447.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo cat /etc/ssl/certs/877447.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/877447.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo cat /usr/share/ca-certificates/877447.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8774472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo cat /etc/ssl/certs/8774472.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8774472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo cat /usr/share/ca-certificates/8774472.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.27s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-037768 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh "sudo systemctl is-active docker": exit status 1 (336.376381ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh "sudo systemctl is-active crio": exit status 1 (316.541121ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
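Note: the non-zero exits above are the expected outcome: systemctl is-active prints "inactive" and exits with status 3 for a stopped unit, which the ssh wrapper surfaces as exit status 1. A quick way to see the raw code inside the node:

# Run is-active remotely and echo its exit status; 3 means the unit is inactive.
out/minikube-linux-arm64 -p functional-037768 ssh 'sudo systemctl is-active docker; echo exit=$?'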

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 version -o=json --components: (1.584957748s)
--- PASS: TestFunctional/parallel/Version/components (1.59s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037768 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-037768
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-037768
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037768 image ls --format short --alsologtostderr:
I0904 06:31:43.870457  918495 out.go:360] Setting OutFile to fd 1 ...
I0904 06:31:43.870596  918495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:43.870608  918495 out.go:374] Setting ErrFile to fd 2...
I0904 06:31:43.870626  918495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:43.870915  918495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
I0904 06:31:43.871569  918495 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:43.871732  918495 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:43.872258  918495 cli_runner.go:164] Run: docker container inspect functional-037768 --format={{.State.Status}}
I0904 06:31:43.891277  918495 ssh_runner.go:195] Run: systemctl --version
I0904 06:31:43.891447  918495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037768
I0904 06:31:43.917981  918495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/functional-037768/id_rsa Username:docker}
I0904 06:31:44.008599  918495 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
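Note: as the stderr shows, image ls on a containerd cluster is answered by running crictl inside the node rather than by a Docker daemon. The same raw data can be inspected directly:

# Raw runtime image list, the same query minikube issues in the stderr above:
out/minikube-linux-arm64 -p functional-037768 ssh -- sudo crictl images --output json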

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037768 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-037768  │ sha256:2ec621 │ 992B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:35f3cb │ 22.9MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:d29193 │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/library/nginx                     │ latest             │ sha256:47ef87 │ 68.9MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:996be7 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kicbase/echo-server               │ functional-037768  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:6fc32d │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:a25f5e │ 15.8MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037768 image ls --format table --alsologtostderr:
I0904 06:31:44.451018  918565 out.go:360] Setting OutFile to fd 1 ...
I0904 06:31:44.451354  918565 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:44.451381  918565 out.go:374] Setting ErrFile to fd 2...
I0904 06:31:44.451430  918565 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:44.451842  918565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
I0904 06:31:44.452907  918565 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:44.453226  918565 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:44.454446  918565 cli_runner.go:164] Run: docker container inspect functional-037768 --format={{.State.Status}}
I0904 06:31:44.484570  918565 ssh_runner.go:195] Run: systemctl --version
I0904 06:31:44.484629  918565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037768
I0904 06:31:44.511995  918565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/functional-037768/id_rsa Username:docker}
I0904 06:31:44.614144  918565 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037768 image ls --format json --alsologtostderr:
[{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"24570751"},{"id":"sha256:996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"20720494"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209e
a6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-037768","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"22788036"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:
2ec62198901b11e2cc5af69db366f24b2002271d2634963629e0e4c529f0c4ba","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-037768"],"size":"992"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"15779792"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:47ef8710c9f5a9276b3e347e3ab71ee44c848
3e20f8636380ae2737ef4c27758","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"68855984"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42
a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22948447"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037768 image ls --format json --alsologtostderr:
I0904 06:31:44.113214  918527 out.go:360] Setting OutFile to fd 1 ...
I0904 06:31:44.113315  918527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:44.113320  918527 out.go:374] Setting ErrFile to fd 2...
I0904 06:31:44.113324  918527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:44.113599  918527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
I0904 06:31:44.114331  918527 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:44.114465  918527 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:44.114958  918527 cli_runner.go:164] Run: docker container inspect functional-037768 --format={{.State.Status}}
I0904 06:31:44.142389  918527 ssh_runner.go:195] Run: systemctl --version
I0904 06:31:44.142449  918527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037768
I0904 06:31:44.165173  918527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/functional-037768/id_rsa Username:docker}
I0904 06:31:44.273740  918527 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037768 image ls --format yaml --alsologtostderr:
- id: sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22948447"
- id: sha256:6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "22788036"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "15779792"
- id: sha256:2ec62198901b11e2cc5af69db366f24b2002271d2634963629e0e4c529f0c4ba
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-037768
size: "992"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "20720494"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "24570751"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-037768
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "68855984"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037768 image ls --format yaml --alsologtostderr:
I0904 06:31:44.772987  918597 out.go:360] Setting OutFile to fd 1 ...
I0904 06:31:44.773749  918597 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:44.773761  918597 out.go:374] Setting ErrFile to fd 2...
I0904 06:31:44.773767  918597 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:44.774100  918597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
I0904 06:31:44.775010  918597 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:44.775146  918597 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:44.775868  918597 cli_runner.go:164] Run: docker container inspect functional-037768 --format={{.State.Status}}
I0904 06:31:44.804420  918597 ssh_runner.go:195] Run: systemctl --version
I0904 06:31:44.804491  918597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037768
I0904 06:31:44.828257  918597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/functional-037768/id_rsa Username:docker}
I0904 06:31:44.926453  918597 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh pgrep buildkitd: exit status 1 (421.34775ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image build -t localhost/my-image:functional-037768 testdata/build --alsologtostderr
2025/09/04 06:31:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 image build -t localhost/my-image:functional-037768 testdata/build --alsologtostderr: (3.981889349s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037768 image build -t localhost/my-image:functional-037768 testdata/build --alsologtostderr:
I0904 06:31:45.478010  918731 out.go:360] Setting OutFile to fd 1 ...
I0904 06:31:45.483322  918731 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:45.483343  918731 out.go:374] Setting ErrFile to fd 2...
I0904 06:31:45.483350  918731 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:31:45.483641  918731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
I0904 06:31:45.484374  918731 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:45.486204  918731 config.go:182] Loaded profile config "functional-037768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 06:31:45.486735  918731 cli_runner.go:164] Run: docker container inspect functional-037768 --format={{.State.Status}}
I0904 06:31:45.507104  918731 ssh_runner.go:195] Run: systemctl --version
I0904 06:31:45.507174  918731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037768
I0904 06:31:45.537376  918731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/functional-037768/id_rsa Username:docker}
I0904 06:31:45.630735  918731 build_images.go:161] Building image from path: /tmp/build.528062611.tar
I0904 06:31:45.630830  918731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 06:31:45.642763  918731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.528062611.tar
I0904 06:31:45.648892  918731 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.528062611.tar: stat -c "%s %y" /var/lib/minikube/build/build.528062611.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.528062611.tar': No such file or directory
I0904 06:31:45.648921  918731 ssh_runner.go:362] scp /tmp/build.528062611.tar --> /var/lib/minikube/build/build.528062611.tar (3072 bytes)
I0904 06:31:45.678590  918731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.528062611
I0904 06:31:45.688193  918731 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.528062611 -xf /var/lib/minikube/build/build.528062611.tar
I0904 06:31:45.698544  918731 containerd.go:394] Building image: /var/lib/minikube/build/build.528062611
I0904 06:31:45.698695  918731 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.528062611 --local dockerfile=/var/lib/minikube/build/build.528062611 --output type=image,name=localhost/my-image:functional-037768
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c776e512aadf33a1df0d3b155fe105cf421563de4981dfadbbb1e30511378ae4 0.0s done
#8 exporting config sha256:95c043fdaf1f3b51459b058ce4f74ccca7577a915a8b6bec87fea6b05e7b25d9 0.0s done
#8 naming to localhost/my-image:functional-037768 done
#8 DONE 0.2s
I0904 06:31:49.349582  918731 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.528062611 --local dockerfile=/var/lib/minikube/build/build.528062611 --output type=image,name=localhost/my-image:functional-037768: (3.650838344s)
I0904 06:31:49.349718  918731 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.528062611
I0904 06:31:49.365659  918731 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.528062611.tar
I0904 06:31:49.376958  918731 build_images.go:217] Built localhost/my-image:functional-037768 from /tmp/build.528062611.tar
I0904 06:31:49.376991  918731 build_images.go:133] succeeded building to: functional-037768
I0904 06:31:49.377002  918731 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.67s)
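Note: the failed pgrep buildkitd probe above is expected here; with no standalone buildkitd, minikube drives the build itself by invoking buildctl inside the node, as the stderr shows. Roughly the remote command it ran (build.528062611 is the per-invocation temp directory from this log and changes every run):

# Equivalent of the logged buildctl invocation, issued over minikube ssh:
out/minikube-linux-arm64 -p functional-037768 ssh -- sudo buildctl build \
  --frontend dockerfile.v0 \
  --local context=/var/lib/minikube/build/build.528062611 \
  --local dockerfile=/var/lib/minikube/build/build.528062611 \
  --output type=image,name=localhost/my-image:functional-037768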

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-037768
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
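
For reference, all three UpdateContextCmd variants drive the same command; a minimal sketch of running it by hand against this profile (the kubectl verification step is an assumption, not part of the test):

    # Rewrite the profile's kubeconfig entry to the current apiserver IP/port:
    out/minikube-linux-arm64 -p functional-037768 update-context
    # Confirm the context exists and points at the refreshed endpoint:
    kubectl config get-contexts functional-037768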

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image load --daemon kicbase/echo-server:functional-037768 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 image load --daemon kicbase/echo-server:functional-037768 --alsologtostderr: (1.228611698s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.50s)
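
For reference, ImageLoadDaemon copies an image from the host's Docker daemon into the node's containerd store; a minimal sketch, assuming the kicbase/echo-server tag created by the Setup step:

    out/minikube-linux-arm64 -p functional-037768 image load --daemon kicbase/echo-server:functional-037768
    out/minikube-linux-arm64 -p functional-037768 image ls   # the tag should now be listed in-cluster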

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image load --daemon kicbase/echo-server:functional-037768 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-037768 image load --daemon kicbase/echo-server:functional-037768 --alsologtostderr: (1.09914883s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-037768 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-037768 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-s7rpd" [9428517d-9cd6-4b6e-a702-253734c18282] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-s7rpd" [9428517d-9cd6-4b6e-a702-253734c18282] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004200887s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)
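
For reference, the deploy-and-wait loop above can be reproduced by hand; a minimal sketch in which kubectl wait stands in for the test helper's readiness poll (the 10m timeout mirrors the wait budget logged above):

    kubectl --context functional-037768 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-037768 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-037768 wait --for=condition=Ready pod -l app=hello-node --timeout=10m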

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-037768
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image load --daemon kicbase/echo-server:functional-037768 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image save kicbase/echo-server:functional-037768 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image rm kicbase/echo-server:functional-037768 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)
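
Taken together with ImageSaveToFile and ImageRemove above, this completes a save/remove/reload round trip through a tarball; condensed, with a hypothetical /tmp path in place of the workspace path logged above:

    out/minikube-linux-arm64 -p functional-037768 image save kicbase/echo-server:functional-037768 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-037768 image rm kicbase/echo-server:functional-037768
    out/minikube-linux-arm64 -p functional-037768 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-037768 image ls   # the tag is back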

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-037768
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 image save --daemon kicbase/echo-server:functional-037768 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-037768
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
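
ImageSaveDaemon is the inverse of ImageLoadDaemon: the image travels from the cluster back into the host's Docker daemon. The same three steps from the log, condensed:

    docker rmi kicbase/echo-server:functional-037768               # drop the host copy first
    out/minikube-linux-arm64 -p functional-037768 image save --daemon kicbase/echo-server:functional-037768
    docker image inspect kicbase/echo-server:functional-037768     # present on the host again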

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-037768 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-037768 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-037768 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-037768 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 914440: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-037768 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-037768 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e229dae7-1ee7-4d1a-94be-a58713451d9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0904 06:31:11.569816  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "nginx-svc" [e229dae7-1ee7-4d1a-94be-a58713451d9a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00403808s
I0904 06:31:18.187848  877447 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 service list -o json
functional_test.go:1504: Took "336.163747ms" to run "out/minikube-linux-arm64 -p functional-037768 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30671
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30671
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
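
The HTTPS, Format, and URL subtests all resolve the same NodePort endpoint; a minimal sketch of checking it by hand (the curl step is an assumption, not part of the test):

    URL=$(out/minikube-linux-arm64 -p functional-037768 service hello-node --url)
    curl -fsS "$URL"   # expect the echo-server reply from http://192.168.49.2:30671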

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-037768 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.181.198 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
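
AccessDirect hits the LoadBalancer ingress IP that the jsonpath query above returned; a minimal sketch of that check, run while the tunnel started in StartTunnel is still active (10.110.181.198 is the IP from this run):

    IP=$(kubectl --context functional-037768 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -fsS "http://$IP/"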

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-037768 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "363.756763ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.921959ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "348.096302ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "61.102685ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
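
Both timings above come from parsing the same document; a minimal sketch of consuming the output, assuming profile list -o json keeps its top-level valid/invalid arrays (the jq filter is an assumption):

    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'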

TestFunctional/parallel/MountCmd/any-port (8.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdany-port2963566281/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1756967489425709492" to /tmp/TestFunctionalparallelMountCmdany-port2963566281/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1756967489425709492" to /tmp/TestFunctionalparallelMountCmdany-port2963566281/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1756967489425709492" to /tmp/TestFunctionalparallelMountCmdany-port2963566281/001/test-1756967489425709492
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (318.73185ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0904 06:31:29.745867  877447 retry.go:31] will retry after 628.381484ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 06:31 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 06:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 06:31 test-1756967489425709492
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh cat /mount-9p/test-1756967489425709492
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-037768 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0186a155-23ad-46bc-b419-fced8eb574a4] Pending
helpers_test.go:352: "busybox-mount" [0186a155-23ad-46bc-b419-fced8eb574a4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0186a155-23ad-46bc-b419-fced8eb574a4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0186a155-23ad-46bc-b419-fced8eb574a4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003443998s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-037768 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdany-port2963566281/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.20s)
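
A minimal sketch of the 9p mount flow exercised here: start the mount in the background, then verify it from inside the node (/tmp/hostdir is a hypothetical host path):

    out/minikube-linux-arm64 mount -p functional-037768 /tmp/hostdir:/mount-9p &
    out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-037768 ssh "ls -la /mount-9p"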

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdspecific-port2948492957/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.210691ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0904 06:31:37.969337  877447 retry.go:31] will retry after 328.68401ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdspecific-port2948492957/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh "sudo umount -f /mount-9p": exit status 1 (258.080946ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-037768 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdspecific-port2948492957/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605453486/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605453486/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605453486/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T" /mount1: exit status 1 (536.704574ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0904 06:31:39.848853  877447 retry.go:31] will retry after 308.981872ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-037768 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-037768 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605453486/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605453486/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup605453486/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-037768
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-037768
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-037768
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (120.03s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0904 06:33:27.706787  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m59.169633362s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (120.03s)
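
A minimal sketch of the HA start exercised here: the --ha flag provisions three control-plane nodes behind a shared endpoint (192.168.49.254:8443 in the status logs further down):

    out/minikube-linux-arm64 -p ha-055064 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-055064 status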

TestMultiControlPlane/serial/DeployApp (22.97s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- rollout status deployment/busybox
E0904 06:33:55.411500  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 kubectl -- rollout status deployment/busybox: (19.889807436s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-jqj9k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xdgsm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xrwz7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-jqj9k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xdgsm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xrwz7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-jqj9k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xdgsm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xrwz7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (22.97s)

TestMultiControlPlane/serial/PingHostFromPods (1.58s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-jqj9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-jqj9k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xdgsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xdgsm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xrwz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 kubectl -- exec busybox-7b57f96db7-xrwz7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)

TestMultiControlPlane/serial/AddWorkerNode (16.91s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 node add --alsologtostderr -v 5: (15.701994896s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5: (1.203785691s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (16.91s)
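
A minimal sketch of the node addition: node add with no extra flags joins a worker, which shows up as ha-055064-m04 in the status output below:

    out/minikube-linux-arm64 -p ha-055064 node add
    out/minikube-linux-arm64 -p ha-055064 status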

TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-055064 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.320189127s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.32s)

TestMultiControlPlane/serial/CopyFile (19.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 status --output json --alsologtostderr -v 5: (1.052178209s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp testdata/cp-test.txt ha-055064:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374837945/001/cp-test_ha-055064.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064:/home/docker/cp-test.txt ha-055064-m02:/home/docker/cp-test_ha-055064_ha-055064-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test_ha-055064_ha-055064-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064:/home/docker/cp-test.txt ha-055064-m03:/home/docker/cp-test_ha-055064_ha-055064-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test_ha-055064_ha-055064-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064:/home/docker/cp-test.txt ha-055064-m04:/home/docker/cp-test_ha-055064_ha-055064-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test_ha-055064_ha-055064-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp testdata/cp-test.txt ha-055064-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374837945/001/cp-test_ha-055064-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m02:/home/docker/cp-test.txt ha-055064:/home/docker/cp-test_ha-055064-m02_ha-055064.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test_ha-055064-m02_ha-055064.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m02:/home/docker/cp-test.txt ha-055064-m03:/home/docker/cp-test_ha-055064-m02_ha-055064-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test_ha-055064-m02_ha-055064-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m02:/home/docker/cp-test.txt ha-055064-m04:/home/docker/cp-test_ha-055064-m02_ha-055064-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test_ha-055064-m02_ha-055064-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp testdata/cp-test.txt ha-055064-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374837945/001/cp-test_ha-055064-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m03:/home/docker/cp-test.txt ha-055064:/home/docker/cp-test_ha-055064-m03_ha-055064.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test_ha-055064-m03_ha-055064.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m03:/home/docker/cp-test.txt ha-055064-m02:/home/docker/cp-test_ha-055064-m03_ha-055064-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test_ha-055064-m03_ha-055064-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m03:/home/docker/cp-test.txt ha-055064-m04:/home/docker/cp-test_ha-055064-m03_ha-055064-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test_ha-055064-m03_ha-055064-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp testdata/cp-test.txt ha-055064-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3374837945/001/cp-test_ha-055064-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m04:/home/docker/cp-test.txt ha-055064:/home/docker/cp-test_ha-055064-m04_ha-055064.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064 "sudo cat /home/docker/cp-test_ha-055064-m04_ha-055064.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m04:/home/docker/cp-test.txt ha-055064-m02:/home/docker/cp-test_ha-055064-m04_ha-055064-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test_ha-055064-m04_ha-055064-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 cp ha-055064-m04:/home/docker/cp-test.txt ha-055064-m03:/home/docker/cp-test_ha-055064-m04_ha-055064-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m03 "sudo cat /home/docker/cp-test_ha-055064-m04_ha-055064-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.98s)
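
The sequence above is every (source, destination) pairing across the four nodes; one leg of it, condensed, shows the pattern of cp into a node followed by ssh -n to read the file back:

    out/minikube-linux-arm64 -p ha-055064 cp testdata/cp-test.txt ha-055064-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-055064 ssh -n ha-055064-m02 "sudo cat /home/docker/cp-test.txt"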

TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 node stop m02 --alsologtostderr -v 5: (12.091190829s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5: exit status 7 (770.456355ms)

-- stdout --
	ha-055064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055064-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055064-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0904 06:35:07.560135  935425 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:35:07.560282  935425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:35:07.560294  935425 out.go:374] Setting ErrFile to fd 2...
	I0904 06:35:07.560300  935425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:35:07.560581  935425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:35:07.560790  935425 out.go:368] Setting JSON to false
	I0904 06:35:07.560824  935425 mustload.go:65] Loading cluster: ha-055064
	I0904 06:35:07.561294  935425 notify.go:220] Checking for updates...
	I0904 06:35:07.562202  935425 config.go:182] Loaded profile config "ha-055064": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:35:07.562234  935425 status.go:174] checking status of ha-055064 ...
	I0904 06:35:07.564125  935425 cli_runner.go:164] Run: docker container inspect ha-055064 --format={{.State.Status}}
	I0904 06:35:07.588832  935425 status.go:371] ha-055064 host status = "Running" (err=<nil>)
	I0904 06:35:07.588854  935425 host.go:66] Checking if "ha-055064" exists ...
	I0904 06:35:07.589296  935425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055064
	I0904 06:35:07.622794  935425 host.go:66] Checking if "ha-055064" exists ...
	I0904 06:35:07.623140  935425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:35:07.623196  935425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055064
	I0904 06:35:07.660007  935425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/ha-055064/id_rsa Username:docker}
	I0904 06:35:07.758789  935425 ssh_runner.go:195] Run: systemctl --version
	I0904 06:35:07.763457  935425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:35:07.784184  935425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:35:07.849378  935425 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-04 06:35:07.838100285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:35:07.850017  935425 kubeconfig.go:125] found "ha-055064" server: "https://192.168.49.254:8443"
	I0904 06:35:07.850056  935425 api_server.go:166] Checking apiserver status ...
	I0904 06:35:07.850100  935425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:35:07.862891  935425 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	I0904 06:35:07.872997  935425 api_server.go:182] apiserver freezer: "10:freezer:/docker/24cf8d250bebc6651a4f1ee51cc6f6f46572a1f773bdcfca06daa0b1f3ef4831/kubepods/burstable/pod2964bb4b6ad6c718b387042d3f87419f/7fef3332c8f83b56663f68052d54589c2a784d043c9af457d86aa15fe919be3e"
	I0904 06:35:07.873116  935425 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/24cf8d250bebc6651a4f1ee51cc6f6f46572a1f773bdcfca06daa0b1f3ef4831/kubepods/burstable/pod2964bb4b6ad6c718b387042d3f87419f/7fef3332c8f83b56663f68052d54589c2a784d043c9af457d86aa15fe919be3e/freezer.state
	I0904 06:35:07.882469  935425 api_server.go:204] freezer state: "THAWED"
	I0904 06:35:07.882500  935425 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 06:35:07.891253  935425 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 06:35:07.891283  935425 status.go:463] ha-055064 apiserver status = Running (err=<nil>)
	I0904 06:35:07.891304  935425 status.go:176] ha-055064 status: &{Name:ha-055064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:35:07.891325  935425 status.go:174] checking status of ha-055064-m02 ...
	I0904 06:35:07.891646  935425 cli_runner.go:164] Run: docker container inspect ha-055064-m02 --format={{.State.Status}}
	I0904 06:35:07.909225  935425 status.go:371] ha-055064-m02 host status = "Stopped" (err=<nil>)
	I0904 06:35:07.909253  935425 status.go:384] host is not running, skipping remaining checks
	I0904 06:35:07.909260  935425 status.go:176] ha-055064-m02 status: &{Name:ha-055064-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:35:07.909281  935425 status.go:174] checking status of ha-055064-m03 ...
	I0904 06:35:07.909590  935425 cli_runner.go:164] Run: docker container inspect ha-055064-m03 --format={{.State.Status}}
	I0904 06:35:07.928915  935425 status.go:371] ha-055064-m03 host status = "Running" (err=<nil>)
	I0904 06:35:07.928943  935425 host.go:66] Checking if "ha-055064-m03" exists ...
	I0904 06:35:07.929362  935425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055064-m03
	I0904 06:35:07.946329  935425 host.go:66] Checking if "ha-055064-m03" exists ...
	I0904 06:35:07.946659  935425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:35:07.946704  935425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055064-m03
	I0904 06:35:07.964932  935425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33909 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/ha-055064-m03/id_rsa Username:docker}
	I0904 06:35:08.058839  935425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:35:08.073995  935425 kubeconfig.go:125] found "ha-055064" server: "https://192.168.49.254:8443"
	I0904 06:35:08.074025  935425 api_server.go:166] Checking apiserver status ...
	I0904 06:35:08.074067  935425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:35:08.086803  935425 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	I0904 06:35:08.096647  935425 api_server.go:182] apiserver freezer: "10:freezer:/docker/04030eeb565777859e45ccc8172f183877a605ac3fc4800e9426e8c29e283ae8/kubepods/burstable/podf8532e6ca29671fc651db6c6c122e596/f9e597d00129eb3bed50c9f3c4578fd4a95b4e11eb0b783c846ccfda8f28a187"
	I0904 06:35:08.096727  935425 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/04030eeb565777859e45ccc8172f183877a605ac3fc4800e9426e8c29e283ae8/kubepods/burstable/podf8532e6ca29671fc651db6c6c122e596/f9e597d00129eb3bed50c9f3c4578fd4a95b4e11eb0b783c846ccfda8f28a187/freezer.state
	I0904 06:35:08.106348  935425 api_server.go:204] freezer state: "THAWED"
	I0904 06:35:08.106377  935425 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 06:35:08.114997  935425 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 06:35:08.115075  935425 status.go:463] ha-055064-m03 apiserver status = Running (err=<nil>)
	I0904 06:35:08.115092  935425 status.go:176] ha-055064-m03 status: &{Name:ha-055064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:35:08.115115  935425 status.go:174] checking status of ha-055064-m04 ...
	I0904 06:35:08.115485  935425 cli_runner.go:164] Run: docker container inspect ha-055064-m04 --format={{.State.Status}}
	I0904 06:35:08.134220  935425 status.go:371] ha-055064-m04 host status = "Running" (err=<nil>)
	I0904 06:35:08.134244  935425 host.go:66] Checking if "ha-055064-m04" exists ...
	I0904 06:35:08.134553  935425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055064-m04
	I0904 06:35:08.153620  935425 host.go:66] Checking if "ha-055064-m04" exists ...
	I0904 06:35:08.153967  935425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:35:08.154021  935425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055064-m04
	I0904 06:35:08.172468  935425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33914 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/ha-055064-m04/id_rsa Username:docker}
	I0904 06:35:08.262305  935425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:35:08.274981  935425 status.go:176] ha-055064-m04 status: &{Name:ha-055064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
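The stderr block above traces the full per-node probe that `minikube status` runs: inspect the container state, sample disk usage with `df`, confirm the kubelet over SSH with `systemctl is-active`, locate the apiserver with `pgrep`, read its freezer cgroup to rule out a paused process, and finally GET `/healthz` on the load-balancer endpoint. Below is a minimal Go sketch of that last step, assuming a cluster-internal serving certificate (hence the skipped TLS verification); the function name and timeout are illustrative, not minikube's actual code.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the final step logged above: an HTTPS GET against
// https://<control-plane-endpoint>:8443/healthz, expecting 200 and "ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver presents a cluster-internal certificate,
			// so this sketch skips verification rather than loading the CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil // maps to APIServer:Running in the status struct above
}

func main() {
	if err := checkHealthz("https://192.168.49.254:8443/healthz"); err != nil {
		fmt.Println("apiserver status:", err)
	}
}
```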

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 node start m02 --alsologtostderr -v 5: (11.852703265s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5: (1.476016868s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.307275638s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 stop --alsologtostderr -v 5: (37.216693399s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 start --wait true --alsologtostderr -v 5
E0904 06:36:05.440093  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:05.446731  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:05.458079  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:05.479440  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:05.520643  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:05.602650  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:05.765036  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:06.086773  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:06.728306  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:08.010367  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:10.572303  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:15.693988  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:25.935304  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:36:46.416942  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 start --wait true --alsologtostderr -v 5: (1m0.486917568s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.92s)
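The timestamps on the cert_rotation errors above are worth reading: successive retries land roughly 6ms, 11ms, 21ms, 41ms, 82ms, ... 5s, 10s, 20s apart, doubling each attempt — the signature of exponential backoff in the client's cert reloader. The sketch below reproduces that retry shape, with the ~6ms base read off the log; the attempt cap and the lack of jitter are illustrative assumptions, not client-go's actual tuning.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after each failure:
// 6ms, 12ms, 24ms, ... reaching ~20s after a dozen attempts, much like
// the spacing of the log lines above.
func retryWithBackoff(op func() error, base time.Duration, attempts int) error {
	delay := base
	var err error
	for i := 1; i <= attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", i, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	loadCert := func() error {
		return errors.New("open client.crt: no such file or directory")
	}
	_ = retryWithBackoff(loadCert, 6*time.Millisecond, 5)
}
```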

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 node delete m03 --alsologtostderr -v 5: (9.792563177s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.74s)
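The go-template passed to kubectl at ha_test.go:521 walks every node's conditions and prints the status of the one whose type is "Ready". It can be evaluated locally with Go's text/template package, which implements the same syntax; the node document below is a hand-written stand-in for `kubectl get nodes -o json`, trimmed to the fields the template touches.

```go
package main

import (
	"os"
	"text/template"
)

// readyTmpl is the exact template the test passes to kubectl: for every
// node, find the condition of type "Ready" and print its status.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// A trimmed, hand-written stand-in for `kubectl get nodes -o json`.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" plus a newline — one line per node.
}
```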

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 stop --alsologtostderr -v 5
E0904 06:37:27.378463  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 stop --alsologtostderr -v 5: (35.907660178s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5: exit status 7 (107.816764ms)

                                                
                                                
-- stdout --
	ha-055064
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055064-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055064-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:37:49.194292  950373 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:37:49.194636  950373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:37:49.194651  950373 out.go:374] Setting ErrFile to fd 2...
	I0904 06:37:49.194658  950373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:37:49.194931  950373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:37:49.195142  950373 out.go:368] Setting JSON to false
	I0904 06:37:49.195186  950373 mustload.go:65] Loading cluster: ha-055064
	I0904 06:37:49.195276  950373 notify.go:220] Checking for updates...
	I0904 06:37:49.195650  950373 config.go:182] Loaded profile config "ha-055064": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:37:49.195676  950373 status.go:174] checking status of ha-055064 ...
	I0904 06:37:49.196518  950373 cli_runner.go:164] Run: docker container inspect ha-055064 --format={{.State.Status}}
	I0904 06:37:49.214388  950373 status.go:371] ha-055064 host status = "Stopped" (err=<nil>)
	I0904 06:37:49.214413  950373 status.go:384] host is not running, skipping remaining checks
	I0904 06:37:49.214420  950373 status.go:176] ha-055064 status: &{Name:ha-055064 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:37:49.214444  950373 status.go:174] checking status of ha-055064-m02 ...
	I0904 06:37:49.214753  950373 cli_runner.go:164] Run: docker container inspect ha-055064-m02 --format={{.State.Status}}
	I0904 06:37:49.236067  950373 status.go:371] ha-055064-m02 host status = "Stopped" (err=<nil>)
	I0904 06:37:49.236142  950373 status.go:384] host is not running, skipping remaining checks
	I0904 06:37:49.236152  950373 status.go:176] ha-055064-m02 status: &{Name:ha-055064-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:37:49.236171  950373 status.go:174] checking status of ha-055064-m04 ...
	I0904 06:37:49.236490  950373 cli_runner.go:164] Run: docker container inspect ha-055064-m04 --format={{.State.Status}}
	I0904 06:37:49.254826  950373 status.go:371] ha-055064-m04 host status = "Stopped" (err=<nil>)
	I0904 06:37:49.254851  950373 status.go:384] host is not running, skipping remaining checks
	I0904 06:37:49.254864  950373 status.go:176] ha-055064-m04 status: &{Name:ha-055064-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.02s)
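Note the exit status 7 above: minikube's status help describes the exit code as a bitmask over the components it checks, under which 7 reads as host (1) + kubelet (2) + apiserver (4) all not running — matching the all-Stopped output. A sketch of that encoding; the constant names are made up for the sketch, not the source's.

```go
package main

import "fmt"

// Component bits as suggested by `minikube status --help`.
const (
	hostNotRunning      = 1 << 0 // container/VM stopped
	kubeletNotRunning   = 1 << 1
	apiserverNotRunning = 1 << 2
)

func statusExitCode(hostOK, kubeletOK, apiserverOK bool) int {
	code := 0
	if !hostOK {
		code |= hostNotRunning
	}
	if !kubeletOK {
		code |= kubeletNotRunning
	}
	if !apiserverOK {
		code |= apiserverNotRunning
	}
	return code
}

func main() {
	fmt.Println(statusExitCode(false, false, false)) // 7, as in the run above
	fmt.Println(statusExitCode(true, true, true))    // 0 for a healthy cluster
}
```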

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (68.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0904 06:38:27.707023  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:38:49.300442  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m7.368892363s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.29s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (39.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 node add --control-plane --alsologtostderr -v 5: (38.589457691s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-055064 status --alsologtostderr -v 5: (1.34404317s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.93s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.17589528s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.18s)

                                                
                                    
TestJSONOutput/start/Command (93.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-511885 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0904 06:41:05.441715  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-511885 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m33.699878681s)
--- PASS: TestJSONOutput/start/Command (93.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-511885 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-511885 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (1.28s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-511885 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-511885 --output=json --user=testUser: (1.279451921s)
--- PASS: TestJSONOutput/stop/Command (1.28s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-392088 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-392088 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.631224ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2c67dfdf-9d5f-4551-bd5d-1ddfb85fdedd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-392088] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b03c80a-3cb8-4cea-8f83-fa74197fc899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"04a9aa14-e24a-4542-93db-9159f37faca2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9737c2d8-2df2-409d-bff5-7d8529763913","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig"}}
	{"specversion":"1.0","id":"a7c5eca0-e178-4c83-9285-65e48ac71317","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube"}}
	{"specversion":"1.0","id":"edc29352-14d9-4d4d-a1c9-939faca3f7ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"26796d49-f3c4-48e6-94ff-dc7227496f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"26df43ea-038b-4ae3-92fb-0624b6a99eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-392088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-392088
--- PASS: TestErrorJSONOutput (0.25s)
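Every line that `--output=json` emits is a CloudEvents 1.0 envelope, as the stdout above shows: `specversion`, `id`, `source`, `type`, `datacontenttype`, and a string-keyed `data` payload whose fields vary by event type (steps carry `currentstep`/`totalsteps`, errors carry `exitcode`/`name`/`message`). A minimal consumer sketch; the struct mirrors only the fields visible here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent models one line of `minikube start --output=json`.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event from the run above, with its id abbreviated.
	line := `{"specversion":"1.0","id":"26df43ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, "->", ev.Data["name"]+":", ev.Data["message"])
}
```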

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.8s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-400931 --network=
E0904 06:41:33.145185  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-400931 --network=: (34.693440475s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-400931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-400931
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-400931: (2.080476253s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.80s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.44s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-107512 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-107512 --network=bridge: (32.443296835s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-107512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-107512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-107512: (1.97275298s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.44s)

                                                
                                    
TestKicExistingNetwork (32.84s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0904 06:42:40.247060  877447 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0904 06:42:40.262736  877447 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0904 06:42:40.262830  877447 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0904 06:42:40.262852  877447 cli_runner.go:164] Run: docker network inspect existing-network
W0904 06:42:40.278599  877447 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0904 06:42:40.278632  877447 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0904 06:42:40.278649  877447 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0904 06:42:40.278746  877447 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0904 06:42:40.295030  877447 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-704187b6c87d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:8e:f9:20:05:99} reservation:<nil>}
I0904 06:42:40.295351  877447 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400169e050}
I0904 06:42:40.295372  877447 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0904 06:42:40.295424  877447 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0904 06:42:40.351817  877447 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-842896 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-842896 --network=existing-network: (30.662755657s)
helpers_test.go:175: Cleaning up "existing-network-842896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-842896
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-842896: (2.036429479s)
I0904 06:43:13.068700  877447 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.84s)
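The network_create lines above show the subnet picker at work: 192.168.49.0/24 is already held by an existing bridge, so the scan advances to 192.168.58.0/24 — a step of 9 in the third octet (the multinode cluster later sits on 192.168.67.x, one more step along). A sketch of that scan, assuming the start and step observed here are the rule; the `taken` map stands in for the interface and reservation checks the real code performs.

```go
package main

import "fmt"

// freeSubnet walks candidate private /24s from 192.168.49.0 upward in
// steps of 9, returning the first one not already claimed.
func freeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // exhausted the range
}

func main() {
	taken := map[string]bool{"192.168.49.0/24": true} // held by br-704187b6c87d above
	fmt.Println(freeSubnet(taken))                    // 192.168.58.0/24
}
```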

                                                
                                    
TestKicCustomSubnet (36.18s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-559701 --subnet=192.168.60.0/24
E0904 06:43:27.706449  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-559701 --subnet=192.168.60.0/24: (34.009345946s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-559701 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-559701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-559701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-559701: (2.146158485s)
--- PASS: TestKicCustomSubnet (36.18s)

                                                
                                    
TestKicStaticIP (32.73s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-049834 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-049834 --static-ip=192.168.200.200: (30.449611942s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-049834 ip
helpers_test.go:175: Cleaning up "static-ip-049834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-049834
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-049834: (2.115942751s)
--- PASS: TestKicStaticIP (32.73s)

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (73.47s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-538522 --driver=docker  --container-runtime=containerd
E0904 06:44:50.773188  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-538522 --driver=docker  --container-runtime=containerd: (32.507757268s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-541131 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-541131 --driver=docker  --container-runtime=containerd: (35.583757706s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-538522
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-541131
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-541131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-541131
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-541131: (2.022229091s)
helpers_test.go:175: Cleaning up "first-538522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-538522
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-538522: (1.957163734s)
--- PASS: TestMinikubeProfile (73.47s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-364150 --memory=3072 --mount-string /tmp/TestMountStartserial459443412/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-364150 --memory=3072 --mount-string /tmp/TestMountStartserial459443412/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.855112588s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.86s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-364150 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-366292 --memory=3072 --mount-string /tmp/TestMountStartserial459443412/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-366292 --memory=3072 --mount-string /tmp/TestMountStartserial459443412/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.039538788s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.04s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-366292 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-364150 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-364150 --alsologtostderr -v=5: (1.622258642s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-366292 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-366292
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-366292: (1.205570048s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.53s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-366292
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-366292: (6.531218411s)
--- PASS: TestMountStart/serial/RestartStopped (7.53s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-366292 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (105.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-691969 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-691969 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m45.219831787s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.73s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (20.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-691969 -- rollout status deployment/busybox: (18.294957446s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-dx2nt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-fwg4q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-dx2nt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-fwg4q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-dx2nt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-fwg4q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.31s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-dx2nt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-dx2nt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-fwg4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-691969 -- exec busybox-7b57f96db7-fwg4q -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
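The DNS check above pulls the host's address out of busybox nslookup output with `awk 'NR==5' | cut -d' ' -f3` — field 3 of line 5 — and then pings it (192.168.67.1, the docker network gateway). The same extraction in Go, run against a hand-written stand-in for the nslookup transcript:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for `nslookup host.minikube.internal` inside a busybox pod.
	out := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal`

	lines := strings.Split(out, "\n")
	fields := strings.Split(lines[4], " ") // awk 'NR==5'
	fmt.Println(fields[2])                 // cut -d' ' -f3 -> 192.168.67.1
}
```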

                                                
                                    
TestMultiNode/serial/AddNode (13.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-691969 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-691969 -v=5 --alsologtostderr: (12.407439564s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (13.26s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-691969 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0904 06:48:27.706709  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp testdata/cp-test.txt multinode-691969:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2436580911/001/cp-test_multinode-691969.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969:/home/docker/cp-test.txt multinode-691969-m02:/home/docker/cp-test_multinode-691969_multinode-691969-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m02 "sudo cat /home/docker/cp-test_multinode-691969_multinode-691969-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969:/home/docker/cp-test.txt multinode-691969-m03:/home/docker/cp-test_multinode-691969_multinode-691969-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m03 "sudo cat /home/docker/cp-test_multinode-691969_multinode-691969-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp testdata/cp-test.txt multinode-691969-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2436580911/001/cp-test_multinode-691969-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969-m02:/home/docker/cp-test.txt multinode-691969:/home/docker/cp-test_multinode-691969-m02_multinode-691969.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969 "sudo cat /home/docker/cp-test_multinode-691969-m02_multinode-691969.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969-m02:/home/docker/cp-test.txt multinode-691969-m03:/home/docker/cp-test_multinode-691969-m02_multinode-691969-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m03 "sudo cat /home/docker/cp-test_multinode-691969-m02_multinode-691969-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp testdata/cp-test.txt multinode-691969-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2436580911/001/cp-test_multinode-691969-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969-m03:/home/docker/cp-test.txt multinode-691969:/home/docker/cp-test_multinode-691969-m03_multinode-691969.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969 "sudo cat /home/docker/cp-test_multinode-691969-m03_multinode-691969.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 cp multinode-691969-m03:/home/docker/cp-test.txt multinode-691969-m02:/home/docker/cp-test_multinode-691969-m03_multinode-691969-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 ssh -n multinode-691969-m02 "sudo cat /home/docker/cp-test_multinode-691969-m03_multinode-691969-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)
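CopyFile exercises every direction of `minikube cp`: host to node, node back to host, and each ordered node-to-node pair, with a `sudo cat` verification after every copy. The sketch below regenerates the same command matrix from the node list; the /tmp paths stand in for the test's temp directory.

```go
package main

import "fmt"

func main() {
	profile := "multinode-691969"
	nodes := []string{profile, profile + "-m02", profile + "-m03"}
	for _, src := range nodes {
		// host -> node, then node -> host, as in the log above.
		fmt.Printf("minikube -p %s cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", profile, src)
		fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt /tmp/cp-test_%s.txt\n", profile, src, src)
		// every ordered node -> node pair.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				profile, src, dst, src, dst)
		}
	}
}
```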

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-691969 node stop m03: (1.203864719s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-691969 status: exit status 7 (515.984791ms)
-- stdout --
	multinode-691969
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-691969-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-691969-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr: exit status 7 (543.434014ms)
-- stdout --
	multinode-691969
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-691969-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-691969-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0904 06:48:39.650154 1004367 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:48:39.650275 1004367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:48:39.650284 1004367 out.go:374] Setting ErrFile to fd 2...
	I0904 06:48:39.650289 1004367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:48:39.650526 1004367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:48:39.650713 1004367 out.go:368] Setting JSON to false
	I0904 06:48:39.650763 1004367 mustload.go:65] Loading cluster: multinode-691969
	I0904 06:48:39.650832 1004367 notify.go:220] Checking for updates...
	I0904 06:48:39.651748 1004367 config.go:182] Loaded profile config "multinode-691969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:48:39.651777 1004367 status.go:174] checking status of multinode-691969 ...
	I0904 06:48:39.652318 1004367 cli_runner.go:164] Run: docker container inspect multinode-691969 --format={{.State.Status}}
	I0904 06:48:39.673219 1004367 status.go:371] multinode-691969 host status = "Running" (err=<nil>)
	I0904 06:48:39.673244 1004367 host.go:66] Checking if "multinode-691969" exists ...
	I0904 06:48:39.673561 1004367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-691969
	I0904 06:48:39.701005 1004367 host.go:66] Checking if "multinode-691969" exists ...
	I0904 06:48:39.701428 1004367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:48:39.701482 1004367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-691969
	I0904 06:48:39.720678 1004367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34019 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/multinode-691969/id_rsa Username:docker}
	I0904 06:48:39.810856 1004367 ssh_runner.go:195] Run: systemctl --version
	I0904 06:48:39.815159 1004367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:48:39.827475 1004367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:48:39.886322 1004367 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-04 06:48:39.876714058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 06:48:39.886912 1004367 kubeconfig.go:125] found "multinode-691969" server: "https://192.168.67.2:8443"
	I0904 06:48:39.886952 1004367 api_server.go:166] Checking apiserver status ...
	I0904 06:48:39.886998 1004367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:48:39.898855 1004367 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	I0904 06:48:39.908619 1004367 api_server.go:182] apiserver freezer: "10:freezer:/docker/070e7f3c876b191da615a44196b17b6f4232468b9f688b059db4548015b76222/kubepods/burstable/pod063b4638466c9db327af395aecbff270/5f1e8ea0fb23fa2e9a423a159aa49d23fec37a6288daeee70dae7a21e63901ce"
	I0904 06:48:39.908694 1004367 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/070e7f3c876b191da615a44196b17b6f4232468b9f688b059db4548015b76222/kubepods/burstable/pod063b4638466c9db327af395aecbff270/5f1e8ea0fb23fa2e9a423a159aa49d23fec37a6288daeee70dae7a21e63901ce/freezer.state
	I0904 06:48:39.918641 1004367 api_server.go:204] freezer state: "THAWED"
	I0904 06:48:39.918670 1004367 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0904 06:48:39.927071 1004367 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0904 06:48:39.927098 1004367 status.go:463] multinode-691969 apiserver status = Running (err=<nil>)
	I0904 06:48:39.927108 1004367 status.go:176] multinode-691969 status: &{Name:multinode-691969 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:48:39.927124 1004367 status.go:174] checking status of multinode-691969-m02 ...
	I0904 06:48:39.927431 1004367 cli_runner.go:164] Run: docker container inspect multinode-691969-m02 --format={{.State.Status}}
	I0904 06:48:39.944660 1004367 status.go:371] multinode-691969-m02 host status = "Running" (err=<nil>)
	I0904 06:48:39.944682 1004367 host.go:66] Checking if "multinode-691969-m02" exists ...
	I0904 06:48:39.944995 1004367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-691969-m02
	I0904 06:48:39.962084 1004367 host.go:66] Checking if "multinode-691969-m02" exists ...
	I0904 06:48:39.962392 1004367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:48:39.962436 1004367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-691969-m02
	I0904 06:48:39.979079 1004367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34024 SSHKeyPath:/home/jenkins/minikube-integration/21409-875589/.minikube/machines/multinode-691969-m02/id_rsa Username:docker}
	I0904 06:48:40.098992 1004367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:48:40.113661 1004367 status.go:176] multinode-691969-m02 status: &{Name:multinode-691969-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:48:40.113697 1004367 status.go:174] checking status of multinode-691969-m03 ...
	I0904 06:48:40.114074 1004367 cli_runner.go:164] Run: docker container inspect multinode-691969-m03 --format={{.State.Status}}
	I0904 06:48:40.133712 1004367 status.go:371] multinode-691969-m03 host status = "Stopped" (err=<nil>)
	I0904 06:48:40.133741 1004367 status.go:384] host is not running, skipping remaining checks
	I0904 06:48:40.133749 1004367 status.go:176] multinode-691969-m03 status: &{Name:multinode-691969-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
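The two "Non-zero exit ... exit status 7" runs above are the expected outcome, not failures: minikube status reports a degraded cluster through its exit code while still printing per-node state, so stopping one worker makes status exit non-zero. A condensed sketch of the check (exit code 7 is simply what this run produced with one host stopped):

	$ minikube -p multinode-691969 node stop m03
	$ minikube -p multinode-691969 status
	$ echo $?    # 7 in this run: m03 reports host and kubelet Stopped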

TestMultiNode/serial/StartAfterStop (7.84s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-691969 node start m03 -v=5 --alsologtostderr: (7.092603267s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.84s)

TestMultiNode/serial/RestartKeepsNodes (79.4s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-691969
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-691969
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-691969: (25.006121269s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-691969 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-691969 --wait=true -v=5 --alsologtostderr: (54.230516978s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-691969
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.40s)

TestMultiNode/serial/DeleteNode (5.6s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-691969 node delete m03: (4.931472582s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.60s)
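The go-template in the final step is a compact readiness probe: it walks every node's status.conditions and prints only the Ready condition's status, one value per line. Reproduced as the harness prints it (in a real shell the inner quotes need escaping):

	$ kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"

After deleting m03, a healthy cluster presumably yields one "True" per remaining node (the control plane and m02).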

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-691969 stop: (23.831763974s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-691969 status: exit status 7 (96.402031ms)
-- stdout --
	multinode-691969
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-691969-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr: exit status 7 (92.481024ms)
-- stdout --
	multinode-691969
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-691969-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0904 06:50:36.951501 1013080 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:50:36.951618 1013080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:50:36.951630 1013080 out.go:374] Setting ErrFile to fd 2...
	I0904 06:50:36.951635 1013080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:50:36.951899 1013080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 06:50:36.952107 1013080 out.go:368] Setting JSON to false
	I0904 06:50:36.952152 1013080 mustload.go:65] Loading cluster: multinode-691969
	I0904 06:50:36.952232 1013080 notify.go:220] Checking for updates...
	I0904 06:50:36.953658 1013080 config.go:182] Loaded profile config "multinode-691969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 06:50:36.953694 1013080 status.go:174] checking status of multinode-691969 ...
	I0904 06:50:36.954470 1013080 cli_runner.go:164] Run: docker container inspect multinode-691969 --format={{.State.Status}}
	I0904 06:50:36.972418 1013080 status.go:371] multinode-691969 host status = "Stopped" (err=<nil>)
	I0904 06:50:36.972439 1013080 status.go:384] host is not running, skipping remaining checks
	I0904 06:50:36.972447 1013080 status.go:176] multinode-691969 status: &{Name:multinode-691969 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:50:36.972478 1013080 status.go:174] checking status of multinode-691969-m02 ...
	I0904 06:50:36.972797 1013080 cli_runner.go:164] Run: docker container inspect multinode-691969-m02 --format={{.State.Status}}
	I0904 06:50:36.994636 1013080 status.go:371] multinode-691969-m02 host status = "Stopped" (err=<nil>)
	I0904 06:50:36.994656 1013080 status.go:384] host is not running, skipping remaining checks
	I0904 06:50:36.994663 1013080 status.go:176] multinode-691969-m02 status: &{Name:multinode-691969-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (50.44s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-691969 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E0904 06:51:05.440493  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-691969 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.759263847s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-691969 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.44s)

TestMultiNode/serial/ValidateNameConflict (35.54s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-691969
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-691969-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-691969-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.034536ms)
-- stdout --
	* [multinode-691969-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-691969-m02' is duplicated with machine name 'multinode-691969-m02' in profile 'multinode-691969'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-691969-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-691969-m03 --driver=docker  --container-runtime=containerd: (33.081686045s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-691969
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-691969: exit status 80 (337.937127ms)
-- stdout --
	* Adding node m03 to cluster multinode-691969 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-691969-m03 already exists in multinode-691969-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-691969-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-691969-m03: (1.968894516s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.54s)
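Both rejections above are the behavior under test: profile names share a namespace with the machine names inside existing profiles, so "multinode-691969-m02" is already taken by a node of the multinode profile (exit 14, MK_USAGE), and "node add" refuses to absorb another profile's machine (exit 80, GUEST_NODE_ADD). A quick way to see which names are occupied (a sketch; the top-level "valid" array in the JSON output is an assumption worth checking against your minikube version):

	$ minikube profile list --output=json | jq -r '.valid[].Name'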

TestPreload (141.89s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-183493 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0904 06:52:28.506748  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-183493 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m13.234631748s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-183493 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-183493 image pull gcr.io/k8s-minikube/busybox: (2.374186467s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-183493
E0904 06:53:27.706487  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-183493: (5.793642235s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-183493 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-183493 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (57.90920834s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-183493 image list
helpers_test.go:175: Cleaning up "test-preload-183493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-183493
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-183493: (2.339943163s)
--- PASS: TestPreload (141.89s)

TestScheduledStopUnix (110.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-832324 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-832324 --memory=3072 --driver=docker  --container-runtime=containerd: (34.193419721s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832324 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-832324 -n scheduled-stop-832324
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832324 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0904 06:55:03.640780  877447 retry.go:31] will retry after 107.46µs: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.645157  877447 retry.go:31] will retry after 124.323µs: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.646314  877447 retry.go:31] will retry after 208.93µs: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.647455  877447 retry.go:31] will retry after 449.455µs: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.648584  877447 retry.go:31] will retry after 493.783µs: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.649708  877447 retry.go:31] will retry after 618.415µs: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.650835  877447 retry.go:31] will retry after 679.052µs: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.651959  877447 retry.go:31] will retry after 2.318475ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.655177  877447 retry.go:31] will retry after 3.486392ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.659493  877447 retry.go:31] will retry after 3.767961ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.663726  877447 retry.go:31] will retry after 7.234068ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.672011  877447 retry.go:31] will retry after 7.745977ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.680313  877447 retry.go:31] will retry after 15.754112ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.696533  877447 retry.go:31] will retry after 15.594336ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.712780  877447 retry.go:31] will retry after 18.324322ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
I0904 06:55:03.732181  877447 retry.go:31] will retry after 26.596143ms: open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/scheduled-stop-832324/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832324 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-832324 -n scheduled-stop-832324
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-832324
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-832324 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0904 06:56:05.447554  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-832324
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-832324: exit status 7 (64.133663ms)
-- stdout --
	scheduled-stop-832324
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-832324 -n scheduled-stop-832324
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-832324 -n scheduled-stop-832324: exit status 7 (68.707205ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-832324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-832324
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-832324: (4.263423911s)
--- PASS: TestScheduledStopUnix (110.02s)
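The whole scheduling flow lives in the stop subcommand: the test arms a timer, cancels it, then re-arms it. Condensed, with times illustrative:

	$ minikube stop -p scheduled-stop-832324 --schedule 5m       # arm a stop five minutes out
	$ minikube stop -p scheduled-stop-832324 --cancel-scheduled  # disarm it
	$ minikube stop -p scheduled-stop-832324 --schedule 15s      # re-arm; once it fires, status shows Stopped and exits 7

The burst of microsecond-scale retry lines in the middle is the test polling for the profile's scheduled-stop pid file before it has been written; it is noise, not an error.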

TestInsufficientStorage (9.79s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-661667 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-661667 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.310609459s)
-- stdout --
	{"specversion":"1.0","id":"65b6e24c-d1a6-442a-be0e-4598b550827f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-661667] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f675e9c3-1abb-46fd-a304-8d28e3adff6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"0a7904c9-9e6e-4571-aa9a-5b0e0b8cfe1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"773231dc-034f-4cdd-8c0b-8acb9e9a6efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig"}}
	{"specversion":"1.0","id":"659363e6-e1ea-4425-8325-c073032fff74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube"}}
	{"specversion":"1.0","id":"e3f5a3e7-7ac4-4308-9821-521683375cb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4fd6c0a1-7fa7-45a2-84e3-98033f471e10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fdeec3f0-2cb9-49d3-aea1-1445de8a8eb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"508dd631-62f6-4a18-af34-f3a38178c418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b9fa676e-b6b4-49ae-a1d7-4e66be681975","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7cff3c0-a5ce-4340-b3b8-b2dd014259e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e0556db6-f51f-41ed-9a6f-8a9a764ff801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-661667\" primary control-plane node in \"insufficient-storage-661667\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"122dc623-1f4d-4d5c-8906-f996f405410b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756936034-21409 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"58aa41ea-34b8-47c9-9b02-7cafc8ac0dc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc169465-5e1b-4b19-b016-8070ac772b86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-661667 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-661667 --output=json --layout=cluster: exit status 7 (293.393665ms)
-- stdout --
	{"Name":"insufficient-storage-661667","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-661667","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0904 06:56:26.533947 1032107 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-661667" does not appear in /home/jenkins/minikube-integration/21409-875589/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-661667 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-661667 --output=json --layout=cluster: exit status 7 (285.093085ms)
-- stdout --
	{"Name":"insufficient-storage-661667","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-661667","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0904 06:56:26.818850 1032169 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-661667" does not appear in /home/jenkins/minikube-integration/21409-875589/kubeconfig
	E0904 06:56:26.829193 1032169 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/insufficient-storage-661667/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-661667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-661667
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-661667: (1.903541383s)
--- PASS: TestInsufficientStorage (9.79s)
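With --output=json, minikube start emits one CloudEvents-style JSON object per line, so the failure above is machine-checkable. A sketch for extracting the error event from a run like this one (jq assumed; the tight storage limits come from the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables the test sets):

	$ minikube start -p insufficient-storage-661667 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.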

TestRunningBinaryUpgrade (63.77s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1088326500 start -p running-upgrade-583155 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1088326500 start -p running-upgrade-583155 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.570443694s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-583155 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-583155 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.159811234s)
helpers_test.go:175: Cleaning up "running-upgrade-583155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-583155
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-583155: (2.355053436s)
--- PASS: TestRunningBinaryUpgrade (63.77s)

TestKubernetesUpgrade (350.01s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.399043457s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-159338
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-159338: (1.232165939s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-159338 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-159338 status --format={{.Host}}: exit status 7 (69.382355ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0904 06:58:27.707211  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m55.705443031s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-159338 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (113.710214ms)
-- stdout --
	* [kubernetes-upgrade-159338] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-159338
	    minikube start -p kubernetes-upgrade-159338 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1593382 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-159338 --kubernetes-version=v1.34.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0904 07:03:27.707143  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (14.903076011s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-159338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-159338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-159338: (2.467535622s)
--- PASS: TestKubernetesUpgrade (350.01s)
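Stripped of the assertions, the upgrade path that passes here is stop-then-start on the same profile with a newer --kubernetes-version; the in-place downgrade is refused by design (exit 106, K8S_DOWNGRADE_UNSUPPORTED), with delete-and-recreate or a second profile as the suggested ways out. Condensed:

	$ minikube start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	$ minikube stop -p kubernetes-upgrade-159338
	$ minikube start -p kubernetes-upgrade-159338 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=containerd
	$ # re-running start with v1.28.0 against this profile now fails with exit 106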

TestMissingContainerUpgrade (141.77s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3332189777 start -p missing-upgrade-779459 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3332189777 start -p missing-upgrade-779459 --memory=3072 --driver=docker  --container-runtime=containerd: (1m4.958990843s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-779459
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-779459
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-779459 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-779459 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.984164022s)
helpers_test.go:175: Cleaning up "missing-upgrade-779459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-779459
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-779459: (2.150681792s)
--- PASS: TestMissingContainerUpgrade (141.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-115146 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-115146 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (90.221675ms)
-- stdout --
	* [NoKubernetes-115146] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
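The exit 14 is the guard being tested: --no-kubernetes contradicts an explicit --kubernetes-version, so start refuses before doing any work. If the version is coming from global config rather than the command line, the unset suggested in the error clears it:

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-115146 --no-kubernetes --driver=docker --container-runtime=containerd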

TestNoKubernetes/serial/StartWithK8s (40.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-115146 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-115146 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.551692348s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-115146 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.99s)

TestNoKubernetes/serial/StartWithStopK8s (18.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-115146 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-115146 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (15.779439624s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-115146 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-115146 status -o json: exit status 2 (430.30655ms)
-- stdout --
	{"Name":"NoKubernetes-115146","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-115146
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-115146: (2.141132438s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.35s)

TestNoKubernetes/serial/Start (7.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-115146 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-115146 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.289702686s)
--- PASS: TestNoKubernetes/serial/Start (7.29s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-115146 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-115146 "sudo systemctl is-active --quiet service kubelet": exit status 1 (384.446137ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
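Here, too, the non-zero exit is the success condition: systemctl is-active exits 3 for an inactive unit, that code surfaces through minikube ssh as "Process exited with status 3", and the test only needs the probe to fail. The same check, made explicit for interactive use (a sketch):

	$ minikube ssh -p NoKubernetes-115146 "sudo systemctl is-active --quiet service kubelet" \
	    || echo "kubelet not running (expected with --no-kubernetes)"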

TestNoKubernetes/serial/ProfileList (0.67s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.67s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-115146
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-115146: (1.210770448s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.77s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-115146 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-115146 --driver=docker  --container-runtime=containerd: (6.767865092s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.77s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-115146 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-115146 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.059109ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

TestStoppedBinaryUpgrade/Upgrade (57.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.595581782 start -p stopped-upgrade-453629 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.595581782 start -p stopped-upgrade-453629 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.927252158s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.595581782 -p stopped-upgrade-453629 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.595581782 -p stopped-upgrade-453629 stop: (1.240339503s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-453629 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-453629 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.854586872s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.02s)
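The upgrade scenario boils down to three commands against one profile: start a cluster with the previous release's binary, stop it, then start the same profile with the freshly built binary, which must adopt the existing cluster state. A sketch of that flow; the profile name comes from the log, the binary paths are illustrative (the real test downloads the old release to a temp file):

// Sketch of the stopped-binary-upgrade flow driven by this test.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	profile := "stopped-upgrade-453629"  // profile name from the log
	oldBin := "/tmp/minikube-v1.32.0"    // hypothetical path to the old release
	newBin := "out/minikube-linux-arm64" // binary under test

	run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=containerd")
	run(oldBin, "-p", profile, "stop")
	run(newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=containerd")
}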

TestStoppedBinaryUpgrade/MinikubeLogs (1.5s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-453629
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-453629: (1.503485052s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.50s)

TestPause/serial/Start (53.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-003297 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0904 07:01:05.442288  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:01:30.775642  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-003297 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (53.990450688s)
--- PASS: TestPause/serial/Start (53.99s)

TestPause/serial/SecondStartNoReconfiguration (7.27s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-003297 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-003297 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.246512139s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.27s)

TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-003297 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-003297 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-003297 --output=json --layout=cluster: exit status 2 (330.235513ms)

-- stdout --
	{"Name":"pause-003297","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-003297","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
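The status payload above is worth unpacking: minikube reports cluster health with HTTP-flavored status codes, 200 for OK, 405 for Stopped, and 418 for Paused, which is why a fully paused cluster makes the status command exit 2. A sketch that decodes a trimmed version of that JSON; field names follow the output shown above:

// Sketch: decode the --output=json --layout=cluster payload.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// Trimmed excerpt of the JSON from the log above.
	raw := []byte(`{"Name":"pause-003297","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-003297","StatusCode":200,
		"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
		"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, c := range st.Nodes[0].Components {
		fmt.Printf("  %s: %s (%d)\n", c.Name, c.StatusName, c.StatusCode)
	}
}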

TestPause/serial/Unpause (0.74s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-003297 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (1.12s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-003297 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-003297 --alsologtostderr -v=5: (1.121023201s)
--- PASS: TestPause/serial/PauseAgain (1.12s)

TestPause/serial/DeletePaused (2.87s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-003297 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-003297 --alsologtostderr -v=5: (2.87103694s)
--- PASS: TestPause/serial/DeletePaused (2.87s)

TestPause/serial/VerifyDeletedResources (14.87s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (14.80107176s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-003297
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-003297: exit status 1 (17.014677ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-003297: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.87s)
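Deletion is verified negatively: "docker volume inspect" on a removed volume must exit non-zero with "no such volume", exactly as the stderr above shows. A small sketch of the same check; volumeGone is an illustrative name, not the suite's helper:

// Sketch: assert a Docker volume is gone by expecting `docker volume
// inspect` to fail, mirroring the check in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func volumeGone(name string) bool {
	// `docker volume inspect` exits 1 and reports "no such volume"
	// when the volume does not exist.
	err := exec.Command("docker", "volume", "inspect", name).Run()
	return err != nil
}

func main() {
	fmt.Println("pause-003297 removed:", volumeGone("pause-003297"))
}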

TestNetworkPlugins/group/false (6s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-251723 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-251723 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (243.015238ms)

-- stdout --
	* [false-251723] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0904 07:03:05.302049 1071001 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:03:05.302435 1071001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:03:05.302445 1071001 out.go:374] Setting ErrFile to fd 2...
	I0904 07:03:05.302454 1071001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:03:05.302704 1071001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-875589/.minikube/bin
	I0904 07:03:05.303140 1071001 out.go:368] Setting JSON to false
	I0904 07:03:05.304015 1071001 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17135,"bootTime":1756952251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0904 07:03:05.304091 1071001 start.go:140] virtualization:  
	I0904 07:03:05.308739 1071001 out.go:179] * [false-251723] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0904 07:03:05.311884 1071001 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:03:05.312153 1071001 notify.go:220] Checking for updates...
	I0904 07:03:05.318866 1071001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:03:05.321837 1071001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-875589/kubeconfig
	I0904 07:03:05.324955 1071001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-875589/.minikube
	I0904 07:03:05.328042 1071001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 07:03:05.330934 1071001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:03:05.334486 1071001 config.go:182] Loaded profile config "kubernetes-upgrade-159338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 07:03:05.334647 1071001 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:03:05.367014 1071001 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 07:03:05.367174 1071001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:03:05.462035 1071001 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-04 07:03:05.452458289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 07:03:05.462158 1071001 docker.go:318] overlay module found
	I0904 07:03:05.465482 1071001 out.go:179] * Using the docker driver based on user configuration
	I0904 07:03:05.468365 1071001 start.go:304] selected driver: docker
	I0904 07:03:05.468378 1071001 start.go:918] validating driver "docker" against <nil>
	I0904 07:03:05.468393 1071001 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:03:05.471940 1071001 out.go:203] 
	W0904 07:03:05.474804 1071001 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0904 07:03:05.477707 1071001 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-251723 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-251723

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-251723

>>> host: /etc/nsswitch.conf:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /etc/hosts:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /etc/resolv.conf:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-251723

>>> host: crictl pods:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: crictl containers:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> k8s: describe netcat deployment:
error: context "false-251723" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-251723" does not exist

>>> k8s: netcat logs:
error: context "false-251723" does not exist

>>> k8s: describe coredns deployment:
error: context "false-251723" does not exist

>>> k8s: describe coredns pods:
error: context "false-251723" does not exist

>>> k8s: coredns logs:
error: context "false-251723" does not exist

>>> k8s: describe api server pod(s):
error: context "false-251723" does not exist

>>> k8s: api server logs:
error: context "false-251723" does not exist

>>> host: /etc/cni:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: ip a s:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: ip r s:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: iptables-save:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: iptables table nat:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> k8s: describe kube-proxy daemon set:
error: context "false-251723" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-251723" does not exist

>>> k8s: kube-proxy logs:
error: context "false-251723" does not exist

>>> host: kubelet daemon status:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: kubelet daemon config:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> k8s: kubelet logs:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-875589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 06:58:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-159338
contexts:
- context:
    cluster: kubernetes-upgrade-159338
    user: kubernetes-upgrade-159338
  name: kubernetes-upgrade-159338
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-159338
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/kubernetes-upgrade-159338/client.crt
    client-key: /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/kubernetes-upgrade-159338/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-251723

>>> host: docker daemon status:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: docker daemon config:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /etc/docker/daemon.json:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: docker system info:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: cri-docker daemon status:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: cri-docker daemon config:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: cri-dockerd version:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: containerd daemon status:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: containerd daemon config:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /etc/containerd/config.toml:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: containerd config dump:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: crio daemon status:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: crio daemon config:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: /etc/crio:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

>>> host: crio config:
* Profile "false-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-251723"

----------------------- debugLogs end: false-251723 [took: 5.522807959s] --------------------------------
helpers_test.go:175: Cleaning up "false-251723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-251723
--- PASS: TestNetworkPlugins/group/false (6.00s)
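The useful signal in this run is the early rejection: containerd has no built-in pod networking, so "--cni=false" fails validation with MK_USAGE before any container is created, and minikube exits with code 14 as the log above shows. A sketch of that rule as illustrative pseudologic (validateCNI is not minikube's actual validator; only the message and exit code are taken from the output):

// Sketch: reject "--cni=false" for CNI-dependent runtimes, then exit
// with the usage-error code observed in the log (14).
package main

import (
	"fmt"
	"os"
)

func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}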

TestStartStop/group/old-k8s-version/serial/FirstStart (71.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-723430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-723430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m11.142702457s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (71.14s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-723430 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [14e5a04a-5161-480d-a9bc-9d180f73e6ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [14e5a04a-5161-480d-a9bc-9d180f73e6ee] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00389702s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-723430 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)
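The deploy step is a create followed by a label-selector wait with an 8-minute budget, matching the pod's Pending to Running transition above. An equivalent expressed with "kubectl wait" (illustrative; the suite polls pod lists itself rather than shelling out to wait):

// Sketch: create the workload, then block until matching pods are Ready.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	ctx := "old-k8s-version-723430" // context name from the log
	kubectl("--context", ctx, "create", "-f", "testdata/busybox.yaml")
	// The test's 8m0s poll loop, expressed as a single `kubectl wait`.
	kubectl("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m")
}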

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-723430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-723430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.091735327s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-723430 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-723430 --alsologtostderr -v=3
E0904 07:06:05.443547  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-723430 --alsologtostderr -v=3: (12.114822411s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-723430 -n old-k8s-version-723430
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-723430 -n old-k8s-version-723430: exit status 7 (79.809621ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-723430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
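Note the "exit status 7 (may be ok)" line: on a stopped profile the status command still prints the host state to stdout but exits non-zero, and the test accepts 7 as the stopped case. A sketch applying the same interpretation; treating 7 as "stopped" follows this suite's convention, not a guarantee of the CLI:

// Sketch: run `minikube status` on a stopped profile and accept exit 7.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-723430")
	out, err := cmd.Output() // stdout is captured even on non-zero exit
	if err == nil {
		fmt.Printf("host: %s\n", out)
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Printf("host: %s (exit 7, expected while stopped)\n", out)
		return
	}
	panic(err)
}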

TestStartStop/group/old-k8s-version/serial/SecondStart (49.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-723430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-723430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.92399319s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-723430 -n old-k8s-version-723430
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.42s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fbw4r" [d794c7bb-2498-4ff8-9f9b-842fb71a5608] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00449106s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fbw4r" [d794c7bb-2498-4ff8-9f9b-842fb71a5608] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003755354s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-723430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-723430 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
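The image audit lists everything present in the runtime and flags anything that is not a stock minikube/Kubernetes image, which is how the kindest/* and busybox test images surface above. A sketch of that classification; the single registry prefix used as the allowlist here is an illustrative stand-in for the test's real list:

// Sketch: flag images outside an expected registry as "non-minikube".
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Image names taken from the log above, plus one stock image.
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.28.0",
		"kindest/kindnetd:v20250512-df8de77b",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	for _, img := range images {
		if !strings.HasPrefix(img, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}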

TestStartStop/group/old-k8s-version/serial/Pause (3.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-723430 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-723430 -n old-k8s-version-723430
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-723430 -n old-k8s-version-723430: exit status 2 (331.299332ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-723430 -n old-k8s-version-723430
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-723430 -n old-k8s-version-723430: exit status 2 (318.276792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-723430 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-723430 -n old-k8s-version-723430
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-723430 -n old-k8s-version-723430
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.21s)

TestStartStop/group/no-preload/serial/FirstStart (88.09s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-092128 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-092128 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m28.090870679s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.09s)

TestStartStop/group/embed-certs/serial/FirstStart (67.59s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-380710 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0904 07:08:27.707199  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-380710 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m7.590661015s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.59s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-380710 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [97a46274-0629-4652-aa9b-183cc1bad511] Pending
helpers_test.go:352: "busybox" [97a46274-0629-4652-aa9b-183cc1bad511] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [97a46274-0629-4652-aa9b-183cc1bad511] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005345775s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-380710 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-380710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-380710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105485002s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-380710 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/embed-certs/serial/Stop (12.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-380710 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-380710 --alsologtostderr -v=3: (12.192263939s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

TestStartStop/group/no-preload/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-092128 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9469e344-97a7-4168-a8cb-a0e6f6124eb0] Pending
helpers_test.go:352: "busybox" [9469e344-97a7-4168-a8cb-a0e6f6124eb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9469e344-97a7-4168-a8cb-a0e6f6124eb0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003931407s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-092128 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-092128 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-092128 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/no-preload/serial/Stop (12.41s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-092128 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-092128 --alsologtostderr -v=3: (12.413440474s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-380710 -n embed-certs-380710
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-380710 -n embed-certs-380710: exit status 7 (139.183092ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-380710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (51.51s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-380710 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0904 07:09:08.508832  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-380710 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (51.122688543s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-380710 -n embed-certs-380710
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-092128 -n no-preload-092128
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-092128 -n no-preload-092128: exit status 7 (104.471107ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-092128 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (56.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-092128 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-092128 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (55.997813629s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-092128 -n no-preload-092128
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.51s)
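
SecondStart reuses the FirstStart command line against the now-stopped profile. The full stop-and-restart cycle from this block, collected in one place (flags copied from this run; --preload=false skips minikube's preloaded-images tarball so images are pulled fresh):

	out/minikube-linux-arm64 stop -p no-preload-092128 --alsologtostderr -v=3
	out/minikube-linux-arm64 start -p no-preload-092128 --memory=3072 --alsologtostderr --wait=true \
	    --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0
	# a zero exit here confirms the host came back
	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-092128 -n no-preload-092128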

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4w6pg" [8a1f457b-a0ff-4e60-b08f-87702544245f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003567599s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4w6pg" [8a1f457b-a0ff-4e60-b08f-87702544245f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004931846s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-380710 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-380710 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
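
VerifyKubernetesImages is a single CLI call; the test parses the JSON output and logs anything outside the expected Kubernetes image set (hence the two "non-minikube image" lines above):

	out/minikube-linux-arm64 -p embed-certs-380710 image list --format=json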

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-380710 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-380710 -n embed-certs-380710
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-380710 -n embed-certs-380710: exit status 2 (352.632296ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-380710 -n embed-certs-380710
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-380710 -n embed-certs-380710: exit status 2 (348.670985ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-380710 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-380710 -n embed-certs-380710
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-380710 -n embed-certs-380710
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)
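
The Pause pattern above, condensed: pause, confirm via status exit codes, unpause, confirm again. After pausing, both status queries exit 2 (reporting "Paused" for the API server and "Stopped" for the kubelet), which the test accepts:

	out/minikube-linux-arm64 pause -p embed-certs-380710 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-380710 -n embed-certs-380710   # "Paused", exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-380710 -n embed-certs-380710     # "Stopped", exit 2
	out/minikube-linux-arm64 unpause -p embed-certs-380710 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-380710 -n embed-certs-380710   # exits 0 again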

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hrxkn" [9aa8b1b3-c3d6-4085-93c4-466cd1dd3cca] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003535293s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-057563 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-057563 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m40.43193224s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hrxkn" [9aa8b1b3-c3d6-4085-93c4-466cd1dd3cca] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0031359s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-092128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-092128 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.06s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-092128 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-092128 --alsologtostderr -v=1: (1.090396261s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-092128 -n no-preload-092128
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-092128 -n no-preload-092128: exit status 2 (422.637449ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-092128 -n no-preload-092128
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-092128 -n no-preload-092128: exit status 2 (426.236005ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-092128 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-092128 -n no-preload-092128
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-092128 -n no-preload-092128
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.87s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-501126 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0904 07:10:48.711329  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:48.717635  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:48.728952  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:48.750594  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:48.791904  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:48.873202  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:49.034498  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:49.356366  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:49.997640  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:51.279749  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:53.842002  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:58.964435  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:11:05.440926  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:11:09.206271  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-501126 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (43.871718466s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.87s)
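
newest-cni deliberately starts with an unconfigured CNI: --network-plugin=cni plus a kubeadm pod-CIDR override, and a reduced --wait set because pods cannot schedule until a network plugin is installed (see the WARNING lines in the later subtests). The start invocation, minus the interleaved cert-rotation noise:

	out/minikube-linux-arm64 start -p newest-cni-501126 --memory=3072 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.0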

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-501126 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-501126 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003873229s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-501126 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-501126 --alsologtostderr -v=3: (1.258502469s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-501126 -n newest-cni-501126
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-501126 -n newest-cni-501126: exit status 7 (78.573991ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-501126 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.96s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-501126 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-501126 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (15.586130644s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-501126 -n newest-cni-501126
E0904 07:11:29.688421  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-501126 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-501126 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-501126 --alsologtostderr -v=1: (1.015258555s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-501126 -n newest-cni-501126
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-501126 -n newest-cni-501126: exit status 2 (330.321219ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-501126 -n newest-cni-501126
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-501126 -n newest-cni-501126: exit status 2 (356.029087ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-501126 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-501126 -n newest-cni-501126
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-501126 -n newest-cni-501126
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (51.09s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (51.090014804s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-057563 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a6e3e4fb-7c66-4325-aed3-1b2bff31d89c] Pending
helpers_test.go:352: "busybox" [a6e3e4fb-7c66-4325-aed3-1b2bff31d89c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a6e3e4fb-7c66-4325-aed3-1b2bff31d89c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004234525s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-057563 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.61s)
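
DeployApp creates the busybox pod from testdata, polls until it is Running, then checks the container's open-file limit. A hand-run approximation (the kubectl wait line is a stand-in for the test's own 8m0s poll loop):

	kubectl --context default-k8s-diff-port-057563 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-057563 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context default-k8s-diff-port-057563 exec busybox -- /bin/sh -c "ulimit -n"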

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-057563 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-057563 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.388293028s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-057563 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-057563 --alsologtostderr -v=3
E0904 07:12:10.649871  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-057563 --alsologtostderr -v=3: (12.32214813s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563: exit status 7 (78.851832ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-057563 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-057563 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-057563 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (51.547104932s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.95s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-251723 "pgrep -a kubelet"
I0904 07:12:27.075431  877447 config.go:182] Loaded profile config "auto-251723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-251723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6kggn" [4771c800-df9d-4b81-822e-8a9036820ea9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6kggn" [4771c800-df9d-4b81-822e-8a9036820ea9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00426136s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.48s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-251723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
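
The DNS/Localhost/HairPin trio reduces to three execs inside the netcat deployment: service-name resolution, a loopback dial, and a hairpin dial (the pod reaching itself through its own service). For the auto profile:

	kubectl --context auto-251723 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"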

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (99.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m39.789178454s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (99.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m2pzc" [e2576148-6662-4bc0-8ef8-65b9b1770b80] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003762389s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m2pzc" [e2576148-6662-4bc0-8ef8-65b9b1770b80] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005022669s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-057563 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-057563 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-057563 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-057563 --alsologtostderr -v=1: (1.022544693s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563: exit status 2 (401.149135ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563: exit status 2 (414.052702ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-057563 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-057563 -n default-k8s-diff-port-057563
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.86s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (57.46s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0904 07:13:27.706816  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/addons-903438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:32.571273  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.036876  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.043252  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.054665  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.076035  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.117421  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.199333  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.360824  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:48.682947  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:49.324912  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:50.606724  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:53.169305  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:58.290909  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:14:08.532770  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (57.456688171s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.46s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mbbw8" [5cbe1d34-5434-4183-8c56-49069b319326] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-mbbw8" [5cbe1d34-5434-4183-8c56-49069b319326] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003651438s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-251723 "pgrep -a kubelet"
E0904 07:14:29.014998  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0904 07:14:29.194627  877447 config.go:182] Loaded profile config "calico-251723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-251723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t4qkw" [dacc9c4b-329b-48f4-9e32-2f774177e0a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t4qkw" [dacc9c4b-329b-48f4-9e32-2f774177e0a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003491589s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-251723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-csgrt" [7f6ae680-5164-4eaf-880d-48570bbf787b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003637991s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
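
ControllerPod waits up to 10m0s for the CNI node agent matching the label selector to become healthy; the same selector can be inspected directly when debugging (a sketch, assuming the profile's kubeconfig context is available):

	kubectl --context kindnet-251723 get pods -n kube-system -l app=kindnet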

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-251723 "pgrep -a kubelet"
I0904 07:14:47.599810  877447 config.go:182] Loaded profile config "kindnet-251723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-251723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dxxcr" [3482fcd0-edb0-488a-8b25-c65c9b92d667] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dxxcr" [3482fcd0-edb0-488a-8b25-c65c9b92d667] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.011802335s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-251723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (63.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0904 07:15:09.976375  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/no-preload-092128/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.761324972s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.76s)
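
custom-flannel exercises --cni with a user-supplied manifest rather than a built-in plugin name; minikube applies the given YAML as the cluster's network plugin. As invoked here:

	out/minikube-linux-arm64 start -p custom-flannel-251723 --memory=3072 --alsologtostderr --wait=true \
	    --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd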

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (50.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0904 07:15:48.710364  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:05.440696  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/functional-037768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.811858724s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.81s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-251723 "pgrep -a kubelet"
I0904 07:16:07.580216  877447 config.go:182] Loaded profile config "custom-flannel-251723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
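
Each KubeletFlags subtest reduces to one command: shell into the node and print the kubelet invocation, which is where a mis-wired CNI or runtime flag would show up. By hand, assuming the profile is still running:

    out/minikube-linux-arm64 ssh -p custom-flannel-251723 "pgrep -a kubelet"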

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-251723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sz68l" [3ab7d65f-c8da-4932-a0cf-b75dd1fc73f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sz68l" [3ab7d65f-c8da-4932-a0cf-b75dd1fc73f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004785754s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)
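
NetCatPod force-replaces the netcat deployment from testdata/netcat-deployment.yaml and then polls for up to 15 minutes until a pod labeled app=netcat is Running and Ready. A rough kubectl equivalent of that readiness wait, as a sketch:

    kubectl --context custom-flannel-251723 replace --force -f testdata/netcat-deployment.yaml
    # Roughly what the test's 15m poll amounts to:
    kubectl --context custom-flannel-251723 wait pod -l app=netcat \
        --for=condition=Ready --timeout=15m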

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-251723 "pgrep -a kubelet"
I0904 07:16:15.078617  877447 config.go:182] Loaded profile config "enable-default-cni-251723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-251723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xjsm4" [c7c2ea81-0c81-4ab1-a3cf-b952ea84b94c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0904 07:16:16.413092  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/old-k8s-version-723430/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xjsm4" [c7c2ea81-0c81-4ab1-a3cf-b952ea84b94c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003609919s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-251723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)
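
The DNS subtests exec nslookup inside the netcat pod; resolving the short name kubernetes.default indicates that the pod's resolv.conf points at the cluster DNS service and that the search domains are in place. When this check fails, the pod-side files the debugLogs sections below collect are the first things to look at, e.g.:

    kubectl --context custom-flannel-251723 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context custom-flannel-251723 exec deployment/netcat -- cat /etc/resolv.conf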

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-251723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (65.49s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m5.485038718s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.49s)

TestNetworkPlugins/group/bridge/Start (52.48s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0904 07:16:50.161199  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:50.167552  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:50.178932  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:50.200761  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:50.242120  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:50.324502  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:50.490495  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:50.813960  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:51.456170  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:52.737532  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:16:55.299698  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:00.421784  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:10.663563  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:27.501755  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:27.508067  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:27.520043  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:27.541398  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:27.582774  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:27.664202  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:27.825593  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:28.147338  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:28.789259  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:30.071204  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:31.145275  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/default-k8s-diff-port-057563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:32.633006  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:17:37.755232  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-251723 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (52.478273285s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.48s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-251723 "pgrep -a kubelet"
I0904 07:17:42.806979  877447 config.go:182] Loaded profile config "bridge-251723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-251723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gg498" [5e145ae6-3ce2-49ed-b541-8879416c3212] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gg498" [5e145ae6-3ce2-49ed-b541-8879416c3212] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003238503s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-475hs" [48633aa4-8528-4882-a87f-7a6d8b48dd08] Running
E0904 07:17:47.997531  877447 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/auto-251723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003272279s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
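
ControllerPod gates the flannel connectivity subtests on the CNI DaemonSet itself being healthy. A rough equivalent of its 10-minute poll with plain kubectl, assuming the flannel-251723 context:

    kubectl --context flannel-251723 -n kube-flannel wait pod -l app=flannel \
        --for=condition=Ready --timeout=10m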

TestNetworkPlugins/group/bridge/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-251723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-251723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I0904 07:17:53.658693  877447 config.go:182] Loaded profile config "flannel-251723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (10.43s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-251723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6tct9" [e1868857-27a0-4c94-9fae-839510fb16c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6tct9" [e1868857-27a0-4c94-9fae-839510fb16c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003782296s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.43s)

TestNetworkPlugins/group/flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-251723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-251723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)
Test skip (30/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, so images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, so the binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test applies only to darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, so images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, so the binaries are present within it.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test applies only to darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-984361 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-984361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-984361
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container-based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip AMD GPU test on all but the docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-055031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-055031
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.62s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-251723 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-251723

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-251723

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /etc/hosts:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /etc/resolv.conf:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-251723

>>> host: crictl pods:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: crictl containers:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> k8s: describe netcat deployment:
error: context "kubenet-251723" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-251723" does not exist

>>> k8s: netcat logs:
error: context "kubenet-251723" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-251723" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-251723" does not exist

>>> k8s: coredns logs:
error: context "kubenet-251723" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-251723" does not exist

>>> k8s: api server logs:
error: context "kubenet-251723" does not exist

>>> host: /etc/cni:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: ip a s:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: ip r s:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: iptables-save:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: iptables table nat:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-251723" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-251723" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-251723" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: kubelet daemon config:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> k8s: kubelet logs:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-875589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 06:58:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-159338
contexts:
- context:
    cluster: kubernetes-upgrade-159338
    user: kubernetes-upgrade-159338
  name: kubernetes-upgrade-159338
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-159338
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/kubernetes-upgrade-159338/client.crt
    client-key: /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/kubernetes-upgrade-159338/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-251723

>>> host: docker daemon status:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: docker daemon config:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: docker system info:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: cri-docker daemon status:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: cri-docker daemon config:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: cri-dockerd version:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: containerd daemon status:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: containerd daemon config:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: containerd config dump:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: crio daemon status:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: crio daemon config:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: /etc/crio:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"

>>> host: crio config:
* Profile "kubenet-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-251723"
----------------------- debugLogs end: kubenet-251723 [took: 4.387822342s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-251723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-251723
--- SKIP: TestNetworkPlugins/group/kubenet (4.62s)
TestNetworkPlugins/group/cilium (5.72s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-251723 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-251723

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-251723

>>> host: /etc/nsswitch.conf:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /etc/hosts:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /etc/resolv.conf:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-251723

>>> host: crictl pods:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: crictl containers:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> k8s: describe netcat deployment:
error: context "cilium-251723" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-251723" does not exist

>>> k8s: netcat logs:
error: context "cilium-251723" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-251723" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-251723" does not exist

>>> k8s: coredns logs:
error: context "cilium-251723" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-251723" does not exist

>>> k8s: api server logs:
error: context "cilium-251723" does not exist

>>> host: /etc/cni:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: ip a s:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: ip r s:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: iptables-save:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: iptables table nat:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-251723

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-251723

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-251723" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-251723" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-251723

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-251723

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-251723" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-251723" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-251723" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-251723" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-251723" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: kubelet daemon config:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> k8s: kubelet logs:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-875589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 06:58:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-159338
contexts:
- context:
    cluster: kubernetes-upgrade-159338
    user: kubernetes-upgrade-159338
  name: kubernetes-upgrade-159338
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-159338
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/kubernetes-upgrade-159338/client.crt
    client-key: /home/jenkins/minikube-integration/21409-875589/.minikube/profiles/kubernetes-upgrade-159338/client.key
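
Note: the kubeconfig dumped above holds only a stale kubernetes-upgrade-159338 entry and an empty current-context, which is consistent with every kubectl probe in this debugLogs run failing for the cilium-251723 context. A minimal way to confirm that locally (a sketch only, assuming kubectl is installed and KUBECONFIG points at the file shown above):

	kubectl config get-contexts                  # lists only kubernetes-upgrade-159338
	kubectl config current-context               # fails: current-context is not set
	kubectl --context cilium-251723 get pods     # fails: context "cilium-251723" does not exist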

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-251723

>>> host: docker daemon status:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: docker daemon config:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: docker system info:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: cri-docker daemon status:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: cri-docker daemon config:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: cri-dockerd version:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: containerd daemon status:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: containerd daemon config:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: containerd config dump:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: crio daemon status:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: crio daemon config:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: /etc/crio:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

>>> host: crio config:
* Profile "cilium-251723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-251723"

----------------------- debugLogs end: cilium-251723 [took: 5.502733942s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-251723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-251723
--- SKIP: TestNetworkPlugins/group/cilium (5.72s)
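
To exercise the skipped cilium CNI path by hand, something like the following should work (a sketch only; the exact flags net_test.go would pass are not shown in this excerpt):

	out/minikube-linux-arm64 start -p cilium-251723 --cni=cilium --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p cilium-251723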