Test Report: Docker_Linux_containerd 21409

                    
0aa34a444c66e47b3763835c9f1ccee8527d3e22:2025-09-04:41274

Test failures (1/332)

Order   Failed test                Duration (s)
54      TestDockerEnvContainerd    40.58
TestDockerEnvContainerd (40.58s)
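Failure mode, reconstructed from the log below: the test provisions a containerd-backed minikube cluster, points the host docker CLI at the node over SSH via docker-env --ssh-host --ssh-add, builds an image successfully over that connection, then fails when docker image ls gets an EOF from the forwarded daemon connection. A manual replay of the failing sequence might look like the sketch below; this is not the test's actual code, the commands are lifted from this run's log, the profile name is arbitrary, and out/minikube-linux-amd64 is the locally built binary under test:

    # Sketch only: replay the failing sequence by hand (commands taken from the log below).
    out/minikube-linux-amd64 start -p dockerenv-217193 --driver=docker --container-runtime=containerd
    # docker-env prints SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST exports; eval them into this shell.
    eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-217193)"
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env  # succeeded (~1.9s)
    docker image ls  # failed here: error during connect: Get "http://docker.example.com/v1.43/images/json": EOF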

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-217193 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-217193 --driver=docker  --container-runtime=containerd: (23.503905101s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-217193"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OmrfoJOe2hNo/agent.417918" SSH_AGENT_PID="417919" DOCKER_HOST=ssh://docker@127.0.0.1:33148 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OmrfoJOe2hNo/agent.417918" SSH_AGENT_PID="417919" DOCKER_HOST=ssh://docker@127.0.0.1:33148 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OmrfoJOe2hNo/agent.417918" SSH_AGENT_PID="417919" DOCKER_HOST=ssh://docker@127.0.0.1:33148 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.860665406s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OmrfoJOe2hNo/agent.417918" SSH_AGENT_PID="417919" DOCKER_HOST=ssh://docker@127.0.0.1:33148 docker image ls"
docker_test.go:250: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OmrfoJOe2hNo/agent.417918" SSH_AGENT_PID="417919" DOCKER_HOST=ssh://docker@127.0.0.1:33148 docker image ls": exit status 1 (529.260531ms)

** stderr ** 
	error during connect: Get "http://docker.example.com/v1.43/images/json": EOF

** /stderr **
docker_test.go:252: failed to execute 'docker image ls', error: exit status 1, output: 
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.43/images/json": EOF

** /stderr **
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-09-04 04:19:24.581272614 +0000 UTC m=+380.672286365
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-217193
helpers_test.go:243: (dbg) docker inspect dockerenv-217193:

-- stdout --
	[
	    {
	        "Id": "10edcd9dc2fc2d4d2f2b61fad134e1f847ac65e15ced074bb5c43947c2311036",
	        "Created": "2025-09-04T04:18:52.690319926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415001,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T04:18:52.718225643Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/10edcd9dc2fc2d4d2f2b61fad134e1f847ac65e15ced074bb5c43947c2311036/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10edcd9dc2fc2d4d2f2b61fad134e1f847ac65e15ced074bb5c43947c2311036/hostname",
	        "HostsPath": "/var/lib/docker/containers/10edcd9dc2fc2d4d2f2b61fad134e1f847ac65e15ced074bb5c43947c2311036/hosts",
	        "LogPath": "/var/lib/docker/containers/10edcd9dc2fc2d4d2f2b61fad134e1f847ac65e15ced074bb5c43947c2311036/10edcd9dc2fc2d4d2f2b61fad134e1f847ac65e15ced074bb5c43947c2311036-json.log",
	        "Name": "/dockerenv-217193",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-217193:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-217193",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10edcd9dc2fc2d4d2f2b61fad134e1f847ac65e15ced074bb5c43947c2311036",
	                "LowerDir": "/var/lib/docker/overlay2/a39c394da3439a85d9718d89bfbc179c99178f57069bf7135f47b7ce95022d95-init/diff:/var/lib/docker/overlay2/0769bef7e3c5865cebf1e3a1be4e4b525196a05d5c3fd7786d90930088730419/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a39c394da3439a85d9718d89bfbc179c99178f57069bf7135f47b7ce95022d95/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a39c394da3439a85d9718d89bfbc179c99178f57069bf7135f47b7ce95022d95/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a39c394da3439a85d9718d89bfbc179c99178f57069bf7135f47b7ce95022d95/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-217193",
	                "Source": "/var/lib/docker/volumes/dockerenv-217193/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-217193",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-217193",
	                "name.minikube.sigs.k8s.io": "dockerenv-217193",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b16dc799c309b3d9121049ab468c3a106cbc3d8ed456bc4a7a7f076a21c86ce7",
	            "SandboxKey": "/var/run/docker/netns/b16dc799c309",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-217193": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:06:28:d2:6d:26",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "100c0a48728fb003ab653cd3e6fb7600eb8ad3871526f074c07473a0a245e798",
	                    "EndpointID": "64aaf0f95c21655319ff29d214407b93763c950a4edf3f28b25f257a56f4d3c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-217193",
	                        "10edcd9dc2fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-217193 -n dockerenv-217193
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-217193 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p dockerenv-217193 logs -n 25: (1.021109141s)
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                      ARGS                                       │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-919243 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:17 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable yakd --alsologtostderr -v=1                        │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:17 UTC │ 04 Sep 25 04:17 UTC │
	│ ip         │ addons-919243 ip                                                                │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:17 UTC │ 04 Sep 25 04:17 UTC │
	│ addons     │ addons-919243 addons disable registry --alsologtostderr -v=1                    │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:17 UTC │ 04 Sep 25 04:17 UTC │
	│ addons     │ addons-919243 addons disable inspektor-gadget --alsologtostderr -v=1            │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:17 UTC │ 04 Sep 25 04:17 UTC │
	│ addons     │ addons-919243 addons disable metrics-server --alsologtostderr -v=1              │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:17 UTC │ 04 Sep 25 04:17 UTC │
	│ addons     │ addons-919243 addons disable cloud-spanner --alsologtostderr -v=1               │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ enable headlamp -p addons-919243 --alsologtostderr -v=1                         │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable nvidia-device-plugin --alsologtostderr -v=1        │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable headlamp --alsologtostderr -v=1                    │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ ssh        │ addons-919243 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'        │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ ip         │ addons-919243 ip                                                                │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable ingress-dns --alsologtostderr -v=1                 │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-919243  │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable registry-creds --alsologtostderr -v=1              │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable ingress --alsologtostderr -v=1                     │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable volumesnapshots --alsologtostderr -v=1             │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ addons-919243 addons disable csi-hostpath-driver --alsologtostderr -v=1         │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ stop       │ -p addons-919243                                                                │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ enable dashboard -p addons-919243                                               │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ disable dashboard -p addons-919243                                              │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ addons     │ disable gvisor -p addons-919243                                                 │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ delete     │ -p addons-919243                                                                │ addons-919243    │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:18 UTC │
	│ start      │ -p dockerenv-217193 --driver=docker  --container-runtime=containerd             │ dockerenv-217193 │ jenkins │ v1.36.0 │ 04 Sep 25 04:18 UTC │ 04 Sep 25 04:19 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-217193                                        │ dockerenv-217193 │ jenkins │ v1.36.0 │ 04 Sep 25 04:19 UTC │ 04 Sep 25 04:19 UTC │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 04:18:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 04:18:47.585871  414464 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:18:47.585972  414464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:18:47.585975  414464 out.go:374] Setting ErrFile to fd 2...
	I0904 04:18:47.585978  414464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:18:47.586147  414464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:18:47.586706  414464 out.go:368] Setting JSON to false
	I0904 04:18:47.587640  414464 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7271,"bootTime":1756952257,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 04:18:47.587725  414464 start.go:140] virtualization: kvm guest
	I0904 04:18:47.589803  414464 out.go:179] * [dockerenv-217193] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 04:18:47.590904  414464 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 04:18:47.590928  414464 notify.go:220] Checking for updates...
	I0904 04:18:47.592925  414464 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 04:18:47.594060  414464 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	I0904 04:18:47.595085  414464 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	I0904 04:18:47.596121  414464 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 04:18:47.597090  414464 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 04:18:47.598294  414464 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 04:18:47.620763  414464 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 04:18:47.620838  414464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:18:47.668639  414464 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-09-04 04:18:47.659459057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:18:47.668755  414464 docker.go:318] overlay module found
	I0904 04:18:47.670772  414464 out.go:179] * Using the docker driver based on user configuration
	I0904 04:18:47.671734  414464 start.go:304] selected driver: docker
	I0904 04:18:47.671741  414464 start.go:918] validating driver "docker" against <nil>
	I0904 04:18:47.671750  414464 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 04:18:47.671848  414464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:18:47.716557  414464 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-09-04 04:18:47.708062418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:18:47.716747  414464 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 04:18:47.717223  414464 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0904 04:18:47.717355  414464 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 04:18:47.718828  414464 out.go:179] * Using Docker driver with root privileges
	I0904 04:18:47.719947  414464 cni.go:84] Creating CNI manager for ""
	I0904 04:18:47.720002  414464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 04:18:47.720010  414464 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 04:18:47.720068  414464 start.go:348] cluster config:
	{Name:dockerenv-217193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-217193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 04:18:47.721194  414464 out.go:179] * Starting "dockerenv-217193" primary control-plane node in "dockerenv-217193" cluster
	I0904 04:18:47.722093  414464 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0904 04:18:47.723065  414464 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 04:18:47.723992  414464 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 04:18:47.724015  414464 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0904 04:18:47.724027  414464 cache.go:58] Caching tarball of preloaded images
	I0904 04:18:47.724087  414464 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 04:18:47.724146  414464 preload.go:172] Found /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0904 04:18:47.724154  414464 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0904 04:18:47.724514  414464 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/config.json ...
	I0904 04:18:47.724533  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/config.json: {Name:mk6b1024e4e931e7400166a2e2d85f0d3c5ccacc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:47.743807  414464 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 04:18:47.743817  414464 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 04:18:47.743831  414464 cache.go:232] Successfully downloaded all kic artifacts
	I0904 04:18:47.743869  414464 start.go:360] acquireMachinesLock for dockerenv-217193: {Name:mk7ccc96e834e525fa5078113b5f9a42d3a0d4b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 04:18:47.743951  414464 start.go:364] duration metric: took 69.646µs to acquireMachinesLock for "dockerenv-217193"
	I0904 04:18:47.743968  414464 start.go:93] Provisioning new machine with config: &{Name:dockerenv-217193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-217193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0904 04:18:47.744022  414464 start.go:125] createHost starting for "" (driver="docker")
	I0904 04:18:47.745427  414464 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0904 04:18:47.745610  414464 start.go:159] libmachine.API.Create for "dockerenv-217193" (driver="docker")
	I0904 04:18:47.745632  414464 client.go:168] LocalClient.Create starting
	I0904 04:18:47.745712  414464 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca.pem
	I0904 04:18:47.745740  414464 main.go:141] libmachine: Decoding PEM data...
	I0904 04:18:47.745753  414464 main.go:141] libmachine: Parsing certificate...
	I0904 04:18:47.745798  414464 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-385918/.minikube/certs/cert.pem
	I0904 04:18:47.745811  414464 main.go:141] libmachine: Decoding PEM data...
	I0904 04:18:47.745823  414464 main.go:141] libmachine: Parsing certificate...
	I0904 04:18:47.746127  414464 cli_runner.go:164] Run: docker network inspect dockerenv-217193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 04:18:47.761460  414464 cli_runner.go:211] docker network inspect dockerenv-217193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 04:18:47.761522  414464 network_create.go:284] running [docker network inspect dockerenv-217193] to gather additional debugging logs...
	I0904 04:18:47.761536  414464 cli_runner.go:164] Run: docker network inspect dockerenv-217193
	W0904 04:18:47.777465  414464 cli_runner.go:211] docker network inspect dockerenv-217193 returned with exit code 1
	I0904 04:18:47.777482  414464 network_create.go:287] error running [docker network inspect dockerenv-217193]: docker network inspect dockerenv-217193: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-217193 not found
	I0904 04:18:47.777491  414464 network_create.go:289] output of [docker network inspect dockerenv-217193]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-217193 not found
	
	** /stderr **
	I0904 04:18:47.777593  414464 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 04:18:47.793048  414464 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc91b0}
	I0904 04:18:47.793087  414464 network_create.go:124] attempt to create docker network dockerenv-217193 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 04:18:47.793156  414464 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-217193 dockerenv-217193
	I0904 04:18:47.839801  414464 network_create.go:108] docker network dockerenv-217193 192.168.49.0/24 created
	I0904 04:18:47.839820  414464 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-217193" container
	I0904 04:18:47.839897  414464 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 04:18:47.855351  414464 cli_runner.go:164] Run: docker volume create dockerenv-217193 --label name.minikube.sigs.k8s.io=dockerenv-217193 --label created_by.minikube.sigs.k8s.io=true
	I0904 04:18:47.871854  414464 oci.go:103] Successfully created a docker volume dockerenv-217193
	I0904 04:18:47.871909  414464 cli_runner.go:164] Run: docker run --rm --name dockerenv-217193-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-217193 --entrypoint /usr/bin/test -v dockerenv-217193:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -d /var/lib
	I0904 04:18:48.313981  414464 oci.go:107] Successfully prepared a docker volume dockerenv-217193
	I0904 04:18:48.314022  414464 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 04:18:48.314044  414464 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 04:18:48.314114  414464 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-217193:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 04:18:52.629705  414464 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-217193:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir: (4.315537917s)
	I0904 04:18:52.629748  414464 kic.go:203] duration metric: took 4.31570003s to extract preloaded images to volume ...
	W0904 04:18:52.630047  414464 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 04:18:52.630150  414464 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 04:18:52.674792  414464 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-217193 --name dockerenv-217193 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-217193 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-217193 --network dockerenv-217193 --ip 192.168.49.2 --volume dockerenv-217193:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc
	I0904 04:18:52.917444  414464 cli_runner.go:164] Run: docker container inspect dockerenv-217193 --format={{.State.Running}}
	I0904 04:18:52.934615  414464 cli_runner.go:164] Run: docker container inspect dockerenv-217193 --format={{.State.Status}}
	I0904 04:18:52.953430  414464 cli_runner.go:164] Run: docker exec dockerenv-217193 stat /var/lib/dpkg/alternatives/iptables
	I0904 04:18:52.992033  414464 oci.go:144] the created container "dockerenv-217193" has a running status.
	I0904 04:18:52.992058  414464 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa...
	I0904 04:18:53.338861  414464 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 04:18:53.357826  414464 cli_runner.go:164] Run: docker container inspect dockerenv-217193 --format={{.State.Status}}
	I0904 04:18:53.374077  414464 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 04:18:53.374092  414464 kic_runner.go:114] Args: [docker exec --privileged dockerenv-217193 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 04:18:53.418903  414464 cli_runner.go:164] Run: docker container inspect dockerenv-217193 --format={{.State.Status}}
	I0904 04:18:53.439155  414464 machine.go:93] provisionDockerMachine start ...
	I0904 04:18:53.439252  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:53.458601  414464 main.go:141] libmachine: Using SSH client type: native
	I0904 04:18:53.458818  414464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0904 04:18:53.458826  414464 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 04:18:53.598744  414464 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-217193
	
	I0904 04:18:53.598771  414464 ubuntu.go:182] provisioning hostname "dockerenv-217193"
	I0904 04:18:53.598855  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:53.617785  414464 main.go:141] libmachine: Using SSH client type: native
	I0904 04:18:53.618020  414464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0904 04:18:53.618029  414464 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-217193 && echo "dockerenv-217193" | sudo tee /etc/hostname
	I0904 04:18:53.748856  414464 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-217193
	
	I0904 04:18:53.748928  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:53.765637  414464 main.go:141] libmachine: Using SSH client type: native
	I0904 04:18:53.765837  414464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0904 04:18:53.765850  414464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-217193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-217193/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-217193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 04:18:53.886720  414464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 04:18:53.886742  414464 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-385918/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-385918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-385918/.minikube}
	I0904 04:18:53.886766  414464 ubuntu.go:190] setting up certificates
	I0904 04:18:53.886776  414464 provision.go:84] configureAuth start
	I0904 04:18:53.886824  414464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-217193
	I0904 04:18:53.903798  414464 provision.go:143] copyHostCerts
	I0904 04:18:53.903847  414464 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-385918/.minikube/key.pem, removing ...
	I0904 04:18:53.903856  414464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-385918/.minikube/key.pem
	I0904 04:18:53.903920  414464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-385918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-385918/.minikube/key.pem (1675 bytes)
	I0904 04:18:53.904007  414464 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-385918/.minikube/ca.pem, removing ...
	I0904 04:18:53.904011  414464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-385918/.minikube/ca.pem
	I0904 04:18:53.904032  414464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-385918/.minikube/ca.pem (1078 bytes)
	I0904 04:18:53.904079  414464 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-385918/.minikube/cert.pem, removing ...
	I0904 04:18:53.904082  414464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-385918/.minikube/cert.pem
	I0904 04:18:53.904100  414464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-385918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-385918/.minikube/cert.pem (1123 bytes)
	I0904 04:18:53.904147  414464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-385918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca-key.pem org=jenkins.dockerenv-217193 san=[127.0.0.1 192.168.49.2 dockerenv-217193 localhost minikube]
	I0904 04:18:54.244726  414464 provision.go:177] copyRemoteCerts
	I0904 04:18:54.244770  414464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 04:18:54.244811  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:54.261805  414464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa Username:docker}
	I0904 04:18:54.347088  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 04:18:54.368128  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0904 04:18:54.388880  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 04:18:54.409438  414464 provision.go:87] duration metric: took 522.646649ms to configureAuth
	I0904 04:18:54.409458  414464 ubuntu.go:206] setting minikube options for container-runtime
	I0904 04:18:54.409614  414464 config.go:182] Loaded profile config "dockerenv-217193": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:18:54.409619  414464 machine.go:96] duration metric: took 970.452111ms to provisionDockerMachine
	I0904 04:18:54.409625  414464 client.go:171] duration metric: took 6.663990098s to LocalClient.Create
	I0904 04:18:54.409647  414464 start.go:167] duration metric: took 6.664038268s to libmachine.API.Create "dockerenv-217193"
	I0904 04:18:54.409653  414464 start.go:293] postStartSetup for "dockerenv-217193" (driver="docker")
	I0904 04:18:54.409662  414464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 04:18:54.409700  414464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 04:18:54.409732  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:54.428124  414464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa Username:docker}
	I0904 04:18:54.519674  414464 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 04:18:54.522680  414464 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 04:18:54.522698  414464 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 04:18:54.522704  414464 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 04:18:54.522712  414464 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 04:18:54.522722  414464 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-385918/.minikube/addons for local assets ...
	I0904 04:18:54.522767  414464 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-385918/.minikube/files for local assets ...
	I0904 04:18:54.522783  414464 start.go:296] duration metric: took 113.125344ms for postStartSetup
	I0904 04:18:54.523121  414464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-217193
	I0904 04:18:54.540486  414464 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/config.json ...
	I0904 04:18:54.540715  414464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 04:18:54.540745  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:54.557231  414464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa Username:docker}
	I0904 04:18:54.643469  414464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 04:18:54.647411  414464 start.go:128] duration metric: took 6.903372472s to createHost
	I0904 04:18:54.647429  414464 start.go:83] releasing machines lock for "dockerenv-217193", held for 6.903471313s
	I0904 04:18:54.647502  414464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-217193
	I0904 04:18:54.663595  414464 ssh_runner.go:195] Run: cat /version.json
	I0904 04:18:54.663636  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:54.663653  414464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 04:18:54.663702  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:18:54.683492  414464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa Username:docker}
	I0904 04:18:54.684993  414464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa Username:docker}
	I0904 04:18:54.842391  414464 ssh_runner.go:195] Run: systemctl --version
	I0904 04:18:54.846446  414464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 04:18:54.850569  414464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0904 04:18:54.872727  414464 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0904 04:18:54.872778  414464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 04:18:54.895548  414464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 04:18:54.895561  414464 start.go:495] detecting cgroup driver to use...
	I0904 04:18:54.895592  414464 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 04:18:54.895642  414464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 04:18:54.905998  414464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 04:18:54.915783  414464 docker.go:218] disabling cri-docker service (if available) ...
	I0904 04:18:54.915821  414464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 04:18:54.927802  414464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 04:18:54.940311  414464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 04:18:55.010623  414464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 04:18:55.087136  414464 docker.go:234] disabling docker service ...
	I0904 04:18:55.087185  414464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 04:18:55.106462  414464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 04:18:55.116591  414464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 04:18:55.191697  414464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 04:18:55.262005  414464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 04:18:55.272041  414464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 04:18:55.286239  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0904 04:18:55.294464  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 04:18:55.302741  414464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 04:18:55.302777  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 04:18:55.311019  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 04:18:55.319302  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 04:18:55.327144  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 04:18:55.334961  414464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 04:18:55.342446  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 04:18:55.350367  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 04:18:55.358432  414464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
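The sed edits above boil down to a handful of settings in /etc/containerd/config.toml; they can be spot-checked with a grep (a sketch, assuming the stock kicbase config layout; the file's own indentation is omitted from the expected output):
  $ sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
  sandbox_image = "registry.k8s.io/pause:3.10.1"
  enable_unprivileged_ports = true
  SystemdCgroup = false
  conf_dir = "/etc/cni/net.d"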
	I0904 04:18:55.366806  414464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 04:18:55.373843  414464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 04:18:55.380937  414464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 04:18:55.455264  414464 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 04:18:55.561438  414464 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0904 04:18:55.561488  414464 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0904 04:18:55.564912  414464 start.go:563] Will wait 60s for crictl version
	I0904 04:18:55.564952  414464 ssh_runner.go:195] Run: which crictl
	I0904 04:18:55.567856  414464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 04:18:55.598626  414464 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0904 04:18:55.598684  414464 ssh_runner.go:195] Run: containerd --version
	I0904 04:18:55.620337  414464 ssh_runner.go:195] Run: containerd --version
	I0904 04:18:55.646230  414464 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0904 04:18:55.647258  414464 cli_runner.go:164] Run: docker network inspect dockerenv-217193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 04:18:55.663264  414464 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 04:18:55.666734  414464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
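The one-liner above rewrites /etc/hosts in place; the expected result is a single entry (the same pattern is reused further down for control-plane.minikube.internal):
  $ grep host.minikube.internal /etc/hosts
  192.168.49.1	host.minikube.internal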
	I0904 04:18:55.676646  414464 kubeadm.go:875] updating cluster {Name:dockerenv-217193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-217193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 04:18:55.676743  414464 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 04:18:55.676787  414464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 04:18:55.707404  414464 containerd.go:627] all images are preloaded for containerd runtime.
	I0904 04:18:55.707414  414464 containerd.go:534] Images already preloaded, skipping extraction
	I0904 04:18:55.707456  414464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 04:18:55.738911  414464 containerd.go:627] all images are preloaded for containerd runtime.
	I0904 04:18:55.738924  414464 cache_images.go:85] Images are preloaded, skipping loading
	I0904 04:18:55.738930  414464 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0904 04:18:55.739015  414464 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-217193 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:dockerenv-217193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 04:18:55.739063  414464 ssh_runner.go:195] Run: sudo crictl info
	I0904 04:18:55.769880  414464 cni.go:84] Creating CNI manager for ""
	I0904 04:18:55.769891  414464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 04:18:55.769903  414464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 04:18:55.769920  414464 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-217193 NodeName:dockerenv-217193 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 04:18:55.770025  414464 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-217193"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 04:18:55.770075  414464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 04:18:55.777824  414464 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 04:18:55.777869  414464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 04:18:55.785730  414464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0904 04:18:55.801373  414464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 04:18:55.816650  414464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
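The rendered config can be sanity-checked offline before the real init; a sketch, using the binaries path from this run (kubeadm's "config validate" subcommand only parses and validates the file, it changes nothing):
  $ sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new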
	I0904 04:18:55.832111  414464 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 04:18:55.835044  414464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 04:18:55.844481  414464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 04:18:55.913756  414464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 04:18:55.925908  414464 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193 for IP: 192.168.49.2
	I0904 04:18:55.925921  414464 certs.go:194] generating shared ca certs ...
	I0904 04:18:55.925943  414464 certs.go:226] acquiring lock for ca certs: {Name:mk610706de434f58eb65dd97917b7c24a5e9f8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:55.926091  414464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-385918/.minikube/ca.key
	I0904 04:18:55.926138  414464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-385918/.minikube/proxy-client-ca.key
	I0904 04:18:55.926146  414464 certs.go:256] generating profile certs ...
	I0904 04:18:55.926207  414464 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/client.key
	I0904 04:18:55.926230  414464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/client.crt with IP's: []
	I0904 04:18:56.352438  414464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/client.crt ...
	I0904 04:18:56.352457  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/client.crt: {Name:mke685c1ec821f6be12165e61eafd4eafce4f9ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:56.352656  414464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/client.key ...
	I0904 04:18:56.352663  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/client.key: {Name:mkc5a50c58dbbaf41edb1786a83849d401615e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:56.352751  414464 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.key.aae209bc
	I0904 04:18:56.352763  414464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.crt.aae209bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 04:18:56.860843  414464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.crt.aae209bc ...
	I0904 04:18:56.860861  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.crt.aae209bc: {Name:mkda3b19353ad437ec208214cae5b6996422642a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:56.861033  414464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.key.aae209bc ...
	I0904 04:18:56.861042  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.key.aae209bc: {Name:mka7fb08587b56e736280dfacdf26c70a2ef57d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:56.861112  414464 certs.go:381] copying /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.crt.aae209bc -> /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.crt
	I0904 04:18:56.861180  414464 certs.go:385] copying /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.key.aae209bc -> /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.key
	I0904 04:18:56.861225  414464 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.key
	I0904 04:18:56.861235  414464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.crt with IP's: []
	I0904 04:18:56.996977  414464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.crt ...
	I0904 04:18:56.996994  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.crt: {Name:mk3ca3675ce85cbdbe51e0f8b2b33a9b62f5b1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:56.997181  414464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.key ...
	I0904 04:18:56.997189  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.key: {Name:mk09e811d38e9dbb18f8babfd8b072798e45fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:18:56.997363  414464 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 04:18:56.997396  414464 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-385918/.minikube/certs/ca.pem (1078 bytes)
	I0904 04:18:56.997425  414464 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-385918/.minikube/certs/cert.pem (1123 bytes)
	I0904 04:18:56.997444  414464 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-385918/.minikube/certs/key.pem (1675 bytes)
	I0904 04:18:56.998115  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 04:18:57.020742  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 04:18:57.041990  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 04:18:57.063296  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 04:18:57.084333  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0904 04:18:57.104814  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 04:18:57.125456  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 04:18:57.147189  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/dockerenv-217193/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 04:18:57.168510  414464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-385918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 04:18:57.190947  414464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 04:18:57.207178  414464 ssh_runner.go:195] Run: openssl version
	I0904 04:18:57.211993  414464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 04:18:57.220021  414464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 04:18:57.222943  414464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I0904 04:18:57.222989  414464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 04:18:57.228913  414464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
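The b5213941.0 symlink name is the certificate's OpenSSL subject hash, which is exactly what the openssl x509 -hash call above prints:
  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941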
	I0904 04:18:57.237384  414464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 04:18:57.240536  414464 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 04:18:57.240570  414464 kubeadm.go:392] StartCluster: {Name:dockerenv-217193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-217193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 04:18:57.240626  414464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0904 04:18:57.240667  414464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 04:18:57.272918  414464 cri.go:89] found id: ""
	I0904 04:18:57.272968  414464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 04:18:57.281136  414464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 04:18:57.289119  414464 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 04:18:57.289170  414464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 04:18:57.296729  414464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 04:18:57.296736  414464 kubeadm.go:157] found existing configuration files:
	
	I0904 04:18:57.296771  414464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 04:18:57.303977  414464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 04:18:57.304014  414464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 04:18:57.311079  414464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 04:18:57.318461  414464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 04:18:57.318505  414464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 04:18:57.326086  414464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 04:18:57.333929  414464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 04:18:57.333966  414464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 04:18:57.341608  414464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 04:18:57.349301  414464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 04:18:57.349339  414464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 04:18:57.356775  414464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 04:18:57.393291  414464 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 04:18:57.393339  414464 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 04:18:57.409318  414464 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 04:18:57.409393  414464 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 04:18:57.409433  414464 kubeadm.go:310] OS: Linux
	I0904 04:18:57.409490  414464 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 04:18:57.409575  414464 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 04:18:57.409659  414464 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 04:18:57.409709  414464 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 04:18:57.409749  414464 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 04:18:57.409803  414464 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 04:18:57.409843  414464 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 04:18:57.409883  414464 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 04:18:57.409936  414464 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 04:18:57.461305  414464 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 04:18:57.461506  414464 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 04:18:57.461625  414464 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 04:18:57.466531  414464 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 04:18:57.469335  414464 out.go:252]   - Generating certificates and keys ...
	I0904 04:18:57.469438  414464 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 04:18:57.469527  414464 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 04:18:57.921035  414464 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 04:18:58.080421  414464 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 04:18:58.614152  414464 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 04:18:58.817713  414464 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 04:18:59.008514  414464 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 04:18:59.008692  414464 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [dockerenv-217193 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 04:18:59.356284  414464 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 04:18:59.356394  414464 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-217193 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 04:18:59.396235  414464 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 04:18:59.574791  414464 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 04:18:59.983321  414464 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 04:18:59.983500  414464 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 04:19:00.222498  414464 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 04:19:00.359681  414464 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 04:19:01.127200  414464 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 04:19:01.582565  414464 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 04:19:02.049700  414464 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 04:19:02.050096  414464 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 04:19:02.052228  414464 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 04:19:02.054128  414464 out.go:252]   - Booting up control plane ...
	I0904 04:19:02.054256  414464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 04:19:02.054324  414464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 04:19:02.054388  414464 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 04:19:02.063039  414464 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 04:19:02.063165  414464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 04:19:02.069553  414464 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 04:19:02.069977  414464 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 04:19:02.070013  414464 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 04:19:02.148681  414464 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 04:19:02.148777  414464 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 04:19:03.184995  414464 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.035790903s
	I0904 04:19:03.190002  414464 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 04:19:03.190127  414464 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0904 04:19:03.190283  414464 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 04:19:03.190389  414464 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 04:19:05.303486  414464 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.113375004s
	I0904 04:19:06.305696  414464 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.115689782s
	I0904 04:19:08.191630  414464 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001556828s
	I0904 04:19:08.202318  414464 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 04:19:08.211805  414464 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 04:19:08.219316  414464 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 04:19:08.219481  414464 kubeadm.go:310] [mark-control-plane] Marking the node dockerenv-217193 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 04:19:08.226878  414464 kubeadm.go:310] [bootstrap-token] Using token: 9bnilw.7c3gesdn7po0bmui
	I0904 04:19:08.228095  414464 out.go:252]   - Configuring RBAC rules ...
	I0904 04:19:08.228235  414464 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 04:19:08.231704  414464 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 04:19:08.237356  414464 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 04:19:08.239718  414464 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 04:19:08.242100  414464 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 04:19:08.244647  414464 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 04:19:08.597116  414464 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 04:19:09.013700  414464 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 04:19:09.598017  414464 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 04:19:09.598875  414464 kubeadm.go:310] 
	I0904 04:19:09.598930  414464 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 04:19:09.598934  414464 kubeadm.go:310] 
	I0904 04:19:09.598998  414464 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 04:19:09.599001  414464 kubeadm.go:310] 
	I0904 04:19:09.599038  414464 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 04:19:09.599088  414464 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 04:19:09.599125  414464 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 04:19:09.599128  414464 kubeadm.go:310] 
	I0904 04:19:09.599172  414464 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 04:19:09.599177  414464 kubeadm.go:310] 
	I0904 04:19:09.599213  414464 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 04:19:09.599216  414464 kubeadm.go:310] 
	I0904 04:19:09.599255  414464 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 04:19:09.599317  414464 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 04:19:09.599368  414464 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 04:19:09.599371  414464 kubeadm.go:310] 
	I0904 04:19:09.599450  414464 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 04:19:09.599508  414464 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 04:19:09.599510  414464 kubeadm.go:310] 
	I0904 04:19:09.599601  414464 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9bnilw.7c3gesdn7po0bmui \
	I0904 04:19:09.599685  414464 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23a2f3c6605ae485931544405f3f71ce4698de62429327be8a7935a80b3bf3e4 \
	I0904 04:19:09.599700  414464 kubeadm.go:310] 	--control-plane 
	I0904 04:19:09.599703  414464 kubeadm.go:310] 
	I0904 04:19:09.599767  414464 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 04:19:09.599784  414464 kubeadm.go:310] 
	I0904 04:19:09.599896  414464 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9bnilw.7c3gesdn7po0bmui \
	I0904 04:19:09.600021  414464 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23a2f3c6605ae485931544405f3f71ce4698de62429327be8a7935a80b3bf3e4 
	I0904 04:19:09.602731  414464 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 04:19:09.602970  414464 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 04:19:09.603076  414464 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
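The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded public key; it can be recomputed from the certs dir used in this run with the standard recipe from the kubeadm docs (a sketch; assumes an RSA CA key, which is what minikube generates):
  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
  23a2f3c6605ae485931544405f3f71ce4698de62429327be8a7935a80b3bf3e4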
	I0904 04:19:09.603116  414464 cni.go:84] Creating CNI manager for ""
	I0904 04:19:09.603125  414464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 04:19:09.604516  414464 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 04:19:09.605475  414464 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 04:19:09.609124  414464 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 04:19:09.609133  414464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 04:19:09.625323  414464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 04:19:09.818893  414464 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 04:19:09.818949  414464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 04:19:09.818978  414464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-217193 minikube.k8s.io/updated_at=2025_09_04T04_19_09_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3abc733bafe6a1418dd7bd66760037215e6f0530 minikube.k8s.io/name=dockerenv-217193 minikube.k8s.io/primary=true
	I0904 04:19:09.826411  414464 ops.go:34] apiserver oom_adj: -16
	I0904 04:19:09.914911  414464 kubeadm.go:1105] duration metric: took 96.008209ms to wait for elevateKubeSystemPrivileges
	I0904 04:19:09.914935  414464 kubeadm.go:394] duration metric: took 12.674369943s to StartCluster
	I0904 04:19:09.914954  414464 settings.go:142] acquiring lock: {Name:mk8f6cb14c2459372c45d893ebfdcf0fb4723051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:19:09.915016  414464 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-385918/kubeconfig
	I0904 04:19:09.915720  414464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-385918/kubeconfig: {Name:mkd65c9fc5b98524fc254dfc0926c25e1ae26b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 04:19:09.915929  414464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 04:19:09.915925  414464 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0904 04:19:09.916005  414464 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 04:19:09.916120  414464 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-217193"
	I0904 04:19:09.916135  414464 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-217193"
	I0904 04:19:09.916157  414464 config.go:182] Loaded profile config "dockerenv-217193": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:19:09.916167  414464 host.go:66] Checking if "dockerenv-217193" exists ...
	I0904 04:19:09.916171  414464 addons.go:69] Setting default-storageclass=true in profile "dockerenv-217193"
	I0904 04:19:09.916222  414464 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-217193"
	I0904 04:19:09.916594  414464 cli_runner.go:164] Run: docker container inspect dockerenv-217193 --format={{.State.Status}}
	I0904 04:19:09.916767  414464 cli_runner.go:164] Run: docker container inspect dockerenv-217193 --format={{.State.Status}}
	I0904 04:19:09.920635  414464 out.go:179] * Verifying Kubernetes components...
	I0904 04:19:09.921836  414464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 04:19:09.944960  414464 addons.go:238] Setting addon default-storageclass=true in "dockerenv-217193"
	I0904 04:19:09.944990  414464 host.go:66] Checking if "dockerenv-217193" exists ...
	I0904 04:19:09.945356  414464 cli_runner.go:164] Run: docker container inspect dockerenv-217193 --format={{.State.Status}}
	I0904 04:19:09.945936  414464 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 04:19:09.946944  414464 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 04:19:09.946955  414464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 04:19:09.946995  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:19:09.962877  414464 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 04:19:09.962897  414464 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 04:19:09.962965  414464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-217193
	I0904 04:19:09.963629  414464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa Username:docker}
	I0904 04:19:09.979441  414464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/dockerenv-217193/id_rsa Username:docker}
	I0904 04:19:10.111405  414464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 04:19:10.117040  414464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 04:19:10.200101  414464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 04:19:10.203475  414464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 04:19:10.487367  414464 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
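The replace above splices a hosts block into the CoreDNS Corefile; afterwards the ConfigMap should contain a stanza like this (a sketch showing only the relevant fragment):
  $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  ...
          hosts {
             192.168.49.1 host.minikube.internal
             fallthrough
          }
  ...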
	I0904 04:19:10.488404  414464 api_server.go:52] waiting for apiserver process to appear ...
	I0904 04:19:10.488449  414464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 04:19:10.647457  414464 api_server.go:72] duration metric: took 731.50395ms to wait for apiserver process to appear ...
	I0904 04:19:10.647476  414464 api_server.go:88] waiting for apiserver healthz status ...
	I0904 04:19:10.647497  414464 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 04:19:10.648572  414464 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0904 04:19:10.649455  414464 addons.go:514] duration metric: took 733.445979ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0904 04:19:10.652942  414464 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
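The same check can be run by hand against the API server (-k because the cluster CA is not in the host trust store):
  $ curl -sk https://192.168.49.2:8443/healthz
  ok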
	I0904 04:19:10.654026  414464 api_server.go:141] control plane version: v1.34.0
	I0904 04:19:10.654054  414464 api_server.go:131] duration metric: took 6.572691ms to wait for apiserver health ...
	I0904 04:19:10.654074  414464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 04:19:10.660627  414464 system_pods.go:59] 5 kube-system pods found
	I0904 04:19:10.660651  414464 system_pods.go:61] "etcd-dockerenv-217193" [08c8c8a2-d659-4ef0-8cdf-0c2146c1df0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 04:19:10.660659  414464 system_pods.go:61] "kube-apiserver-dockerenv-217193" [dbddf88b-5005-453b-a120-543e88d4aff4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 04:19:10.660667  414464 system_pods.go:61] "kube-controller-manager-dockerenv-217193" [342d8967-72a7-4537-b86d-ce724e452a1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 04:19:10.660674  414464 system_pods.go:61] "kube-scheduler-dockerenv-217193" [47cd1f6f-3506-4e8c-ac8f-8ffac2a1aca1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 04:19:10.660679  414464 system_pods.go:61] "storage-provisioner" [ffb965cc-c095-4882-8791-04705cf7da12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0904 04:19:10.660686  414464 system_pods.go:74] duration metric: took 6.605855ms to wait for pod list to return data ...
	I0904 04:19:10.660699  414464 kubeadm.go:578] duration metric: took 744.753705ms to wait for: map[apiserver:true system_pods:true]
	I0904 04:19:10.660713  414464 node_conditions.go:102] verifying NodePressure condition ...
	I0904 04:19:10.688802  414464 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 04:19:10.688826  414464 node_conditions.go:123] node cpu capacity is 8
	I0904 04:19:10.688840  414464 node_conditions.go:105] duration metric: took 28.121852ms to run NodePressure ...
	I0904 04:19:10.688857  414464 start.go:241] waiting for startup goroutines ...
	I0904 04:19:10.991147  414464 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-217193" context rescaled to 1 replicas
	I0904 04:19:10.991174  414464 start.go:246] waiting for cluster config update ...
	I0904 04:19:10.991184  414464 start.go:255] writing updated cluster config ...
	I0904 04:19:10.991463  414464 ssh_runner.go:195] Run: rm -f paused
	I0904 04:19:11.037473  414464 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 04:19:11.039046  414464 out.go:179] * Done! kubectl is now configured to use "dockerenv-217193" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	23323c4022a5b       6e38f40d628db       10 seconds ago      Running             storage-provisioner       0                   0f69dcc06d55e       storage-provisioner
	5bc6827155b31       409467f978b4a       10 seconds ago      Running             kindnet-cni               0                   54eed6739b651       kindnet-mnwr5
	c1a1346561256       df0860106674d       10 seconds ago      Running             kube-proxy                0                   c04e59eb842bf       kube-proxy-9pb58
	648b293f7c73c       46169d968e920       22 seconds ago      Running             kube-scheduler            0                   af0d190160327       kube-scheduler-dockerenv-217193
	0f65722abb6ff       5f1f5298c888d       22 seconds ago      Running             etcd                      0                   224118f65eaa7       etcd-dockerenv-217193
	11708c7261969       90550c43ad2bc       22 seconds ago      Running             kube-apiserver            0                   d2873668b9312       kube-apiserver-dockerenv-217193
	2d1ede8793cd4       a0af72f2ec6d6       22 seconds ago      Running             kube-controller-manager   0                   30d37c312ab36       kube-controller-manager-dockerenv-217193
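The table above is crictl output captured inside the node; it can be reproduced with (a sketch, using this run's profile name):
  $ minikube -p dockerenv-217193 ssh -- sudo crictl ps -a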
	
	
	==> containerd <==
	Sep 04 04:19:03 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:03.512710280Z" level=info msg="StartContainer for \"648b293f7c73c0466d83687e06eda5c577b3a24c874c1b0e36d83326e2367613\" returns successfully"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.457901130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9pb58,Uid:0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1,Namespace:kube-system,Attempt:0,}"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.458943975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-mnwr5,Uid:9f21759e-20e4-449e-9e58-bd5928a0a693,Namespace:kube-system,Attempt:0,}"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.513332282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9pb58,Uid:0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c04e59eb842bf3ac778c7b3e4dcbb5e87379940c008ff79adcebc7c6f8985308\""
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.518501202Z" level=info msg="CreateContainer within sandbox \"c04e59eb842bf3ac778c7b3e4dcbb5e87379940c008ff79adcebc7c6f8985308\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.528085486Z" level=info msg="CreateContainer within sandbox \"c04e59eb842bf3ac778c7b3e4dcbb5e87379940c008ff79adcebc7c6f8985308\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1a13465612563defe350071cbafa9a0879fda386e30f8921a8e251dcc0bbf6e\""
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.528541684Z" level=info msg="StartContainer for \"c1a13465612563defe350071cbafa9a0879fda386e30f8921a8e251dcc0bbf6e\""
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.578329282Z" level=info msg="StartContainer for \"c1a13465612563defe350071cbafa9a0879fda386e30f8921a8e251dcc0bbf6e\" returns successfully"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.745994765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8gzd7,Uid:adb705d0-af19-4565-8472-9065c0285819,Namespace:kube-system,Attempt:0,}"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.763597353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8gzd7,Uid:adb705d0-af19-4565-8472-9065c0285819,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\": failed to find network info for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\""
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.795593708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-mnwr5,Uid:9f21759e-20e4-449e-9e58-bd5928a0a693,Namespace:kube-system,Attempt:0,} returns sandbox id \"54eed6739b6515f609593324777c130f3195e002ba224d7f48fc99fa4f48db35\""
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.800045549Z" level=info msg="CreateContainer within sandbox \"54eed6739b6515f609593324777c130f3195e002ba224d7f48fc99fa4f48db35\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.808287800Z" level=info msg="CreateContainer within sandbox \"54eed6739b6515f609593324777c130f3195e002ba224d7f48fc99fa4f48db35\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"5bc6827155b3191b26297acb4221cf3df4a391076cb72be237636880a06a4a84\""
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.808716500Z" level=info msg="StartContainer for \"5bc6827155b3191b26297acb4221cf3df4a391076cb72be237636880a06a4a84\""
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.898077301Z" level=info msg="StartContainer for \"5bc6827155b3191b26297acb4221cf3df4a391076cb72be237636880a06a4a84\" returns successfully"
	Sep 04 04:19:14 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:14.992898107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ffb965cc-c095-4882-8791-04705cf7da12,Namespace:kube-system,Attempt:0,}"
	Sep 04 04:19:15 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:15.060161995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:ffb965cc-c095-4882-8791-04705cf7da12,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f69dcc06d55e169451195e23a8cc3177dee08f2a075474f170791b9cb87a81a\""
	Sep 04 04:19:15 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:15.065339381Z" level=info msg="CreateContainer within sandbox \"0f69dcc06d55e169451195e23a8cc3177dee08f2a075474f170791b9cb87a81a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 04 04:19:15 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:15.090532939Z" level=info msg="CreateContainer within sandbox \"0f69dcc06d55e169451195e23a8cc3177dee08f2a075474f170791b9cb87a81a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"23323c4022a5b50574cb1782e4c351d3df7f0268c7878fa1b53aec138739eb3f\""
	Sep 04 04:19:15 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:15.091172989Z" level=info msg="StartContainer for \"23323c4022a5b50574cb1782e4c351d3df7f0268c7878fa1b53aec138739eb3f\""
	Sep 04 04:19:15 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:15.131228733Z" level=info msg="StartContainer for \"23323c4022a5b50574cb1782e4c351d3df7f0268c7878fa1b53aec138739eb3f\" returns successfully"
	Sep 04 04:19:19 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:19.334423589Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 04 04:19:23 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:23.703565859Z" level=info msg="ImageCreate event name:\"docker.io/local/minikube-dockerenv-containerd-test:latest\""
	Sep 04 04:19:23 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:23.709720282Z" level=info msg="ImageCreate event name:\"sha256:b5071690d691e592d1838713d34f6e17359e609f6f72854cb670728c823ff7a7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 04 04:19:23 dockerenv-217193 containerd[870]: time="2025-09-04T04:19:23.710198243Z" level=info msg="ImageUpdate event name:\"docker.io/local/minikube-dockerenv-containerd-test:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	Name:               dockerenv-217193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-217193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3abc733bafe6a1418dd7bd66760037215e6f0530
	                    minikube.k8s.io/name=dockerenv-217193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T04_19_09_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 04:19:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-217193
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 04:19:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 04:19:19 +0000   Thu, 04 Sep 2025 04:19:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 04:19:19 +0000   Thu, 04 Sep 2025 04:19:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 04:19:19 +0000   Thu, 04 Sep 2025 04:19:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 04:19:19 +0000   Thu, 04 Sep 2025 04:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-217193
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 030de4d86bc9467f946a61ba6b5d9099
	  System UUID:                9a5b297c-2a4f-4722-af54-fb8051a0fe0f
	  Boot ID:                    68caae6e-4dcf-4a37-934f-61939f76c834
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8gzd7                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11s
	  kube-system                 etcd-dockerenv-217193                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16s
	  kube-system                 kindnet-mnwr5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-apiserver-dockerenv-217193             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-dockerenv-217193    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-proxy-9pb58                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-dockerenv-217193             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 10s   kube-proxy       
	  Normal   Starting                 17s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  17s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16s   kubelet          Node dockerenv-217193 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s   kubelet          Node dockerenv-217193 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s   kubelet          Node dockerenv-217193 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12s   node-controller  Node dockerenv-217193 event: Registered Node dockerenv-217193 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da c6 e3 f7 65 46 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 ac 13 3b 75 f6 08 06
	[Sep 4 03:55] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 db a8 31 e2 62 08 06
	[  +1.064536] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 72 6f 1b fa 08 06
	[  +0.014846] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 db a8 31 e2 62 08 06
	[ +31.300168] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 55 d4 db 2a 96 08 06
	[  +2.589730] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000044] ll header: 00000000: ff ff ff ff ff ff 16 2c a6 c1 c4 1b 08 06
	[  +6.063495] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 49 80 8e 10 d6 08 06
	[  +0.000382] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 c8 72 6f 1b fa 08 06
	[  +3.577529] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 db c1 e7 c7 5b 08 06
	[  +0.000364] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 55 d4 db 2a 96 08 06
	[  +8.471961] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 04 f5 69 b3 bb 08 06
	[  +0.000327] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 16 2c a6 c1 c4 1b 08 06
	
	
	==> etcd [0f65722abb6ffc705c02ed98ef2211988a73747afc305f6919cab7e842be1c8c] <==
	{"level":"warn","ts":"2025-09-04T04:19:05.324560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.331048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.386869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.392973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.399455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.410294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.417129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.429449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.435811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.459062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.484900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.490792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.496693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.503472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.509060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.515167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.521937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.528751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.534980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.541684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.547525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.589698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.597205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.603224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T04:19:05.687781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54048","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:19:25 up  2:01,  0 users,  load average: 1.16, 1.13, 1.05
	Linux dockerenv-217193 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5bc6827155b3191b26297acb4221cf3df4a391076cb72be237636880a06a4a84] <==
	I0904 04:19:15.086074       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0904 04:19:15.086285       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0904 04:19:15.086428       1 main.go:148] setting mtu 1500 for CNI 
	I0904 04:19:15.086447       1 main.go:178] kindnetd IP family: "ipv4"
	I0904 04:19:15.086460       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-04T04:19:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0904 04:19:15.292522       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0904 04:19:15.292541       1 controller.go:381] "Waiting for informer caches to sync"
	I0904 04:19:15.292549       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0904 04:19:15.292661       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0904 04:19:15.692860       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0904 04:19:15.692899       1 metrics.go:72] Registering metrics
	I0904 04:19:15.692959       1 controller.go:711] "Syncing nftables rules"
	I0904 04:19:25.295002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 04:19:25.295063       1 main.go:301] handling current node
	
	
	==> kube-apiserver [11708c726196969bffe75fd9a1162adc6655ccf8a3ceccc143f9e0172d8035dd] <==
	I0904 04:19:06.389834       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0904 04:19:06.389862       1 policy_source.go:240] refreshing policies
	I0904 04:19:06.409348       1 controller.go:667] quota admission added evaluator for: namespaces
	I0904 04:19:06.448349       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 04:19:06.448349       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I0904 04:19:06.451894       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 04:19:06.451914       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0904 04:19:06.547048       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 04:19:07.213016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0904 04:19:07.216654       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0904 04:19:07.216678       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 04:19:07.675408       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 04:19:07.708540       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 04:19:07.816478       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0904 04:19:07.822152       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0904 04:19:07.823071       1 controller.go:667] quota admission added evaluator for: endpoints
	I0904 04:19:07.826458       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 04:19:08.231574       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0904 04:19:09.004458       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0904 04:19:09.012872       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0904 04:19:09.019822       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0904 04:19:14.032344       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0904 04:19:14.132697       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0904 04:19:14.183517       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 04:19:14.186527       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [2d1ede8793cd4e5c0407bceaf56d3b6fffa01bbca4e588dd446f277edd822313] <==
	I0904 04:19:13.229535       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0904 04:19:13.230721       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 04:19:13.230740       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0904 04:19:13.230763       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 04:19:13.230859       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 04:19:13.230916       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 04:19:13.230898       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0904 04:19:13.230925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 04:19:13.230874       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0904 04:19:13.231463       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 04:19:13.231488       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 04:19:13.232045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 04:19:13.232273       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 04:19:13.234547       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0904 04:19:13.234599       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0904 04:19:13.234658       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0904 04:19:13.234696       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0904 04:19:13.234708       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0904 04:19:13.234717       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0904 04:19:13.236821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 04:19:13.236937       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 04:19:13.240277       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0904 04:19:13.240579       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-217193" podCIDRs=["10.244.0.0/24"]
	I0904 04:19:13.245501       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0904 04:19:13.251781       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c1a13465612563defe350071cbafa9a0879fda386e30f8921a8e251dcc0bbf6e] <==
	I0904 04:19:14.608281       1 server_linux.go:53] "Using iptables proxy"
	I0904 04:19:14.736031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 04:19:14.836409       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 04:19:14.836445       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 04:19:14.836569       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 04:19:14.854400       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 04:19:14.854469       1 server_linux.go:132] "Using iptables Proxier"
	I0904 04:19:14.858364       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 04:19:14.858802       1 server.go:527] "Version info" version="v1.34.0"
	I0904 04:19:14.858861       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 04:19:14.860076       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 04:19:14.860118       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 04:19:14.860130       1 config.go:200] "Starting service config controller"
	I0904 04:19:14.860133       1 config.go:106] "Starting endpoint slice config controller"
	I0904 04:19:14.860182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 04:19:14.860149       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 04:19:14.860213       1 config.go:309] "Starting node config controller"
	I0904 04:19:14.860278       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 04:19:14.860285       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 04:19:14.960280       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 04:19:14.960303       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 04:19:14.960959       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [648b293f7c73c0466d83687e06eda5c577b3a24c874c1b0e36d83326e2367613] <==
	E0904 04:19:06.304076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 04:19:06.303997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 04:19:06.304140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 04:19:06.304140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 04:19:06.304197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 04:19:06.304228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 04:19:06.304311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 04:19:06.304370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 04:19:06.304380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 04:19:06.304413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 04:19:06.304431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 04:19:06.304433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 04:19:06.304511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 04:19:06.304615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 04:19:07.112723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 04:19:07.114565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 04:19:07.155035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 04:19:07.181241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 04:19:07.367515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 04:19:07.368556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 04:19:07.375650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 04:19:07.434337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 04:19:07.449279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 04:19:07.505770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0904 04:19:07.900900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 04:19:13 dockerenv-217193 kubelet[1669]: E0904 04:19:13.314379    1669 projected.go:196] Error preparing data for projected volume kube-api-access-lsv98 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 04 04:19:13 dockerenv-217193 kubelet[1669]: E0904 04:19:13.314505    1669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffb965cc-c095-4882-8791-04705cf7da12-kube-api-access-lsv98 podName:ffb965cc-c095-4882-8791-04705cf7da12 nodeName:}" failed. No retries permitted until 2025-09-04 04:19:13.814476936 +0000 UTC m=+5.032698384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lsv98" (UniqueName: "kubernetes.io/projected/ffb965cc-c095-4882-8791-04705cf7da12-kube-api-access-lsv98") pod "storage-provisioner" (UID: "ffb965cc-c095-4882-8791-04705cf7da12") : configmap "kube-root-ca.crt" not found
	Sep 04 04:19:13 dockerenv-217193 kubelet[1669]: E0904 04:19:13.911479    1669 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 04 04:19:13 dockerenv-217193 kubelet[1669]: E0904 04:19:13.911509    1669 projected.go:196] Error preparing data for projected volume kube-api-access-lsv98 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 04 04:19:13 dockerenv-217193 kubelet[1669]: E0904 04:19:13.911562    1669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffb965cc-c095-4882-8791-04705cf7da12-kube-api-access-lsv98 podName:ffb965cc-c095-4882-8791-04705cf7da12 nodeName:}" failed. No retries permitted until 2025-09-04 04:19:14.911547417 +0000 UTC m=+6.129768847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lsv98" (UniqueName: "kubernetes.io/projected/ffb965cc-c095-4882-8791-04705cf7da12-kube-api-access-lsv98") pod "storage-provisioner" (UID: "ffb965cc-c095-4882-8791-04705cf7da12") : configmap "kube-root-ca.crt" not found
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212673    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1-kube-proxy\") pod \"kube-proxy-9pb58\" (UID: \"0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1\") " pod="kube-system/kube-proxy-9pb58"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212713    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q8c5\" (UniqueName: \"kubernetes.io/projected/0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1-kube-api-access-2q8c5\") pod \"kube-proxy-9pb58\" (UID: \"0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1\") " pod="kube-system/kube-proxy-9pb58"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212734    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9f21759e-20e4-449e-9e58-bd5928a0a693-cni-cfg\") pod \"kindnet-mnwr5\" (UID: \"9f21759e-20e4-449e-9e58-bd5928a0a693\") " pod="kube-system/kindnet-mnwr5"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212747    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f21759e-20e4-449e-9e58-bd5928a0a693-lib-modules\") pod \"kindnet-mnwr5\" (UID: \"9f21759e-20e4-449e-9e58-bd5928a0a693\") " pod="kube-system/kindnet-mnwr5"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212762    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnf59\" (UniqueName: \"kubernetes.io/projected/9f21759e-20e4-449e-9e58-bd5928a0a693-kube-api-access-wnf59\") pod \"kindnet-mnwr5\" (UID: \"9f21759e-20e4-449e-9e58-bd5928a0a693\") " pod="kube-system/kindnet-mnwr5"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212870    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1-xtables-lock\") pod \"kube-proxy-9pb58\" (UID: \"0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1\") " pod="kube-system/kube-proxy-9pb58"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212915    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1-lib-modules\") pod \"kube-proxy-9pb58\" (UID: \"0e0b3d30-8a2f-4faa-b64f-aca99f78f6e1\") " pod="kube-system/kube-proxy-9pb58"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.212932    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f21759e-20e4-449e-9e58-bd5928a0a693-xtables-lock\") pod \"kindnet-mnwr5\" (UID: \"9f21759e-20e4-449e-9e58-bd5928a0a693\") " pod="kube-system/kindnet-mnwr5"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.318651    1669 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.514454    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adb705d0-af19-4565-8472-9065c0285819-config-volume\") pod \"coredns-66bc5c9577-8gzd7\" (UID: \"adb705d0-af19-4565-8472-9065c0285819\") " pod="kube-system/coredns-66bc5c9577-8gzd7"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.514487    1669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdzwp\" (UniqueName: \"kubernetes.io/projected/adb705d0-af19-4565-8472-9065c0285819-kube-api-access-gdzwp\") pod \"coredns-66bc5c9577-8gzd7\" (UID: \"adb705d0-af19-4565-8472-9065c0285819\") " pod="kube-system/coredns-66bc5c9577-8gzd7"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: E0904 04:19:14.763902    1669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\": failed to find network info for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\""
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: E0904 04:19:14.763974    1669 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\": failed to find network info for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\"" pod="kube-system/coredns-66bc5c9577-8gzd7"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: E0904 04:19:14.763996    1669 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\": failed to find network info for sandbox \"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\"" pod="kube-system/coredns-66bc5c9577-8gzd7"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: E0904 04:19:14.764064    1669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8gzd7_kube-system(adb705d0-af19-4565-8472-9065c0285819)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8gzd7_kube-system(adb705d0-af19-4565-8472-9065c0285819)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\\\": failed to find network info for sandbox \\\"454448924ca646766924f8271c3f77cdff4d330b0284a90ceb3fe2128ee12d81\\\"\"" pod="kube-system/coredns-66bc5c9577-8gzd7" podUID="adb705d0-af19-4565-8472-9065c0285819"
	Sep 04 04:19:14 dockerenv-217193 kubelet[1669]: I0904 04:19:14.924360    1669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mnwr5" podStartSLOduration=0.924341809 podStartE2EDuration="924.341809ms" podCreationTimestamp="2025-09-04 04:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:19:14.924150689 +0000 UTC m=+6.142372141" watchObservedRunningTime="2025-09-04 04:19:14.924341809 +0000 UTC m=+6.142563260"
	Sep 04 04:19:15 dockerenv-217193 kubelet[1669]: I0904 04:19:15.925344    1669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9pb58" podStartSLOduration=1.92532444 podStartE2EDuration="1.92532444s" podCreationTimestamp="2025-09-04 04:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:19:14.932413794 +0000 UTC m=+6.150635245" watchObservedRunningTime="2025-09-04 04:19:15.92532444 +0000 UTC m=+7.143545892"
	Sep 04 04:19:18 dockerenv-217193 kubelet[1669]: I0904 04:19:18.170180    1669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.170163718 podStartE2EDuration="8.170163718s" podCreationTimestamp="2025-09-04 04:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:19:15.925503511 +0000 UTC m=+7.143724951" watchObservedRunningTime="2025-09-04 04:19:18.170163718 +0000 UTC m=+9.388385169"
	Sep 04 04:19:19 dockerenv-217193 kubelet[1669]: I0904 04:19:19.333751    1669 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 04:19:19 dockerenv-217193 kubelet[1669]: I0904 04:19:19.334723    1669 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	
	==> storage-provisioner [23323c4022a5b50574cb1782e4c351d3df7f0268c7878fa1b53aec138739eb3f] <==
	I0904 04:19:15.140512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 04:19:15.147847       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 04:19:15.147897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0904 04:19:15.149847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:15.153305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0904 04:19:15.153976       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 04:19:15.154055       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"63422e87-776d-4d15-98d7-a78b7f6d9354", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' dockerenv-217193_019e0ff1-ee0d-4256-b290-ed338b9cec7c became leader
	I0904 04:19:15.154123       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_dockerenv-217193_019e0ff1-ee0d-4256-b290-ed338b9cec7c!
	W0904 04:19:15.155870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:15.158406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0904 04:19:15.254677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_dockerenv-217193_019e0ff1-ee0d-4256-b290-ed338b9cec7c!
	W0904 04:19:17.161650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:17.166271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:19.169374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:19.173537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:21.176630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:21.180717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:23.184426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:23.187968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:25.191026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 04:19:25.194447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-217193 -n dockerenv-217193
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-217193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-8gzd7
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-217193 describe pod coredns-66bc5c9577-8gzd7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-217193 describe pod coredns-66bc5c9577-8gzd7: exit status 1 (58.987013ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-8gzd7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context dockerenv-217193 describe pod coredns-66bc5c9577-8gzd7: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-217193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-217193
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-217193: (1.792673904s)
--- FAIL: TestDockerEnvContainerd (40.58s)
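
Note on the post-mortem describe failure in the transcript above: the non-running-pod listing (which reported coredns-66bc5c9577-8gzd7) and the follow-up kubectl describe run as two separate commands, so a pod that is deleted or replaced between the two calls makes the describe step itself exit non-zero with NotFound, as happened here. A minimal race-tolerant sketch, assuming only that kubectl and the dockerenv-217193 context are still reachable (the context name is taken from this log; the pipeline itself is illustrative, not part of the test harness):

	# List non-running pods together with their namespaces in one pass,
	# then describe each one, tolerating pods that vanish in between.
	kubectl --context dockerenv-217193 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  # "|| true" keeps the post-mortem going if the pod was already deleted.
	  kubectl --context dockerenv-217193 describe pod "$name" -n "$ns" || true
	done

Capturing the namespace alongside the pod name also avoids relying on the current context's default namespace when describing pods that, as in this run, live in kube-system.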

                                                
                                    

Test pass (306/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 13.22
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 11.6
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.2
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.11
21 TestBinaryMirror 0.77
22 TestOffline 59.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 154.5
29 TestAddons/serial/Volcano 70.84
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 10.43
35 TestAddons/parallel/Registry 15.03
36 TestAddons/parallel/RegistryCreds 0.63
37 TestAddons/parallel/Ingress 21.48
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 5.71
41 TestAddons/parallel/CSI 53.2
42 TestAddons/parallel/Headlamp 17.37
43 TestAddons/parallel/CloudSpanner 5.53
44 TestAddons/parallel/LocalPath 52.84
45 TestAddons/parallel/NvidiaDevicePlugin 6.68
46 TestAddons/parallel/Yakd 10.69
47 TestAddons/parallel/AmdGpuDevicePlugin 6.46
48 TestAddons/StoppedEnableDisable 12.12
49 TestCertOptions 33.98
50 TestCertExpiration 216.28
52 TestForceSystemdFlag 30.13
53 TestForceSystemdEnv 35.73
55 TestKVMDriverInstallOrUpdate 1.4
59 TestErrorSpam/setup 21.87
60 TestErrorSpam/start 0.56
61 TestErrorSpam/status 0.85
62 TestErrorSpam/pause 1.46
63 TestErrorSpam/unpause 1.72
64 TestErrorSpam/stop 2.41
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 40.52
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 5.63
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.74
76 TestFunctional/serial/CacheCmd/cache/add_local 1.86
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 40.98
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.31
87 TestFunctional/serial/LogsFileCmd 1.32
88 TestFunctional/serial/InvalidService 4.32
90 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DashboardCmd 13
92 TestFunctional/parallel/DryRun 0.37
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.99
98 TestFunctional/parallel/ServiceCmdConnect 11.66
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 30.55
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.81
104 TestFunctional/parallel/MySQL 21.73
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.79
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
114 TestFunctional/parallel/License 0.36
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
118 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.22
124 TestFunctional/parallel/ServiceCmd/List 0.47
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
127 TestFunctional/parallel/ServiceCmd/Format 0.43
128 TestFunctional/parallel/ServiceCmd/URL 0.4
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
130 TestFunctional/parallel/ProfileCmd/profile_list 0.38
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/MountCmd/any-port 16.8
139 TestFunctional/parallel/Version/short 0.06
140 TestFunctional/parallel/Version/components 0.93
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
146 TestFunctional/parallel/ImageCommands/Setup 1.71
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.21
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.99
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
154 TestFunctional/parallel/MountCmd/specific-port 1.8
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 94.22
164 TestMultiControlPlane/serial/DeployApp 17.12
165 TestMultiControlPlane/serial/PingHostFromPods 1.04
166 TestMultiControlPlane/serial/AddWorkerNode 12.25
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
169 TestMultiControlPlane/serial/CopyFile 15.62
170 TestMultiControlPlane/serial/StopSecondaryNode 12.52
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.43
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 92.23
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.03
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
177 TestMultiControlPlane/serial/StopCluster 35.6
178 TestMultiControlPlane/serial/RestartCluster 56.82
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
180 TestMultiControlPlane/serial/AddSecondaryNode 32.92
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
185 TestJSONOutput/start/Command 56.91
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.64
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.56
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.71
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.2
210 TestKicCustomNetwork/create_custom_network 35.41
211 TestKicCustomNetwork/use_default_bridge_network 24.18
212 TestKicExistingNetwork 24.89
213 TestKicCustomSubnet 24.84
214 TestKicStaticIP 27.12
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 53.36
219 TestMountStart/serial/StartWithMountFirst 5.41
220 TestMountStart/serial/VerifyMountFirst 0.24
221 TestMountStart/serial/StartWithMountSecond 5.68
222 TestMountStart/serial/VerifyMountSecond 0.24
223 TestMountStart/serial/DeleteFirst 1.58
224 TestMountStart/serial/VerifyMountPostDelete 0.24
225 TestMountStart/serial/Stop 1.17
226 TestMountStart/serial/RestartStopped 7.17
227 TestMountStart/serial/VerifyMountPostStop 0.24
230 TestMultiNode/serial/FreshStart2Nodes 60.68
231 TestMultiNode/serial/DeployApp2Nodes 18.89
232 TestMultiNode/serial/PingHostFrom2Pods 0.71
233 TestMultiNode/serial/AddNode 10.5
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 8.9
237 TestMultiNode/serial/StopNode 2.07
238 TestMultiNode/serial/StartAfterStop 6.6
239 TestMultiNode/serial/RestartKeepsNodes 77.21
240 TestMultiNode/serial/DeleteNode 5.08
241 TestMultiNode/serial/StopMultiNode 23.81
242 TestMultiNode/serial/RestartMultiNode 45.98
243 TestMultiNode/serial/ValidateNameConflict 23.98
248 TestPreload 132.31
250 TestScheduledStopUnix 98.36
253 TestInsufficientStorage 12.01
254 TestRunningBinaryUpgrade 50.82
256 TestKubernetesUpgrade 320.07
257 TestMissingContainerUpgrade 140.69
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestNoKubernetes/serial/StartWithK8s 33.06
268 TestNetworkPlugins/group/false 7.48
272 TestNoKubernetes/serial/StartWithStopK8s 23.27
273 TestNoKubernetes/serial/Start 5.44
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
275 TestNoKubernetes/serial/ProfileList 1.63
276 TestNoKubernetes/serial/Stop 1.48
277 TestNoKubernetes/serial/StartNoArgs 6.8
278 TestStoppedBinaryUpgrade/Setup 3.05
279 TestStoppedBinaryUpgrade/Upgrade 86.43
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
281 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
290 TestPause/serial/Start 88.55
291 TestNetworkPlugins/group/auto/Start 52.97
292 TestNetworkPlugins/group/kindnet/Start 46.82
293 TestNetworkPlugins/group/auto/KubeletFlags 0.25
294 TestNetworkPlugins/group/auto/NetCatPod 9.23
295 TestNetworkPlugins/group/auto/DNS 0.13
296 TestNetworkPlugins/group/auto/Localhost 0.12
297 TestNetworkPlugins/group/auto/HairPin 0.12
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
300 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
301 TestNetworkPlugins/group/kindnet/DNS 0.12
302 TestNetworkPlugins/group/kindnet/Localhost 0.1
303 TestNetworkPlugins/group/kindnet/HairPin 0.11
304 TestNetworkPlugins/group/calico/Start 52.3
305 TestPause/serial/SecondStartNoReconfiguration 6.48
306 TestPause/serial/Pause 0.74
307 TestPause/serial/VerifyStatus 0.33
308 TestPause/serial/Unpause 0.67
309 TestPause/serial/PauseAgain 0.79
310 TestPause/serial/DeletePaused 2.59
311 TestPause/serial/VerifyDeletedResources 16.28
312 TestNetworkPlugins/group/custom-flannel/Start 42.45
313 TestNetworkPlugins/group/flannel/Start 51.04
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.27
316 TestNetworkPlugins/group/calico/NetCatPod 9.19
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.22
319 TestNetworkPlugins/group/calico/DNS 0.21
320 TestNetworkPlugins/group/calico/Localhost 0.1
321 TestNetworkPlugins/group/calico/HairPin 0.1
322 TestNetworkPlugins/group/custom-flannel/DNS 0.12
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
325 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
327 TestNetworkPlugins/group/flannel/NetCatPod 9.24
328 TestNetworkPlugins/group/bridge/Start 43.71
329 TestNetworkPlugins/group/enable-default-cni/Start 69.78
330 TestNetworkPlugins/group/flannel/DNS 0.17
331 TestNetworkPlugins/group/flannel/Localhost 0.13
332 TestNetworkPlugins/group/flannel/HairPin 0.13
334 TestStartStop/group/old-k8s-version/serial/FirstStart 56.44
336 TestStartStop/group/no-preload/serial/FirstStart 68.29
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
338 TestNetworkPlugins/group/bridge/NetCatPod 10.23
339 TestNetworkPlugins/group/bridge/DNS 0.12
340 TestNetworkPlugins/group/bridge/Localhost 0.1
341 TestNetworkPlugins/group/bridge/HairPin 0.1
343 TestStartStop/group/embed-certs/serial/FirstStart 46.47
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.19
346 TestStartStop/group/old-k8s-version/serial/DeployApp 8.32
347 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
348 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
349 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
350 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
351 TestStartStop/group/old-k8s-version/serial/Stop 12.02
352 TestStartStop/group/no-preload/serial/DeployApp 8.3
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.59
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
356 TestStartStop/group/old-k8s-version/serial/SecondStart 52.87
357 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.83
358 TestStartStop/group/no-preload/serial/Stop 12.38
359 TestStartStop/group/embed-certs/serial/DeployApp 10.28
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
361 TestStartStop/group/no-preload/serial/SecondStart 47.7
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
363 TestStartStop/group/embed-certs/serial/Stop 13.02
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
365 TestStartStop/group/embed-certs/serial/SecondStart 47.23
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
373 TestStartStop/group/old-k8s-version/serial/Pause 2.84
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.22
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
378 TestStartStop/group/newest-cni/serial/FirstStart 33.41
379 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
380 TestStartStop/group/no-preload/serial/Pause 3.73
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
384 TestStartStop/group/embed-certs/serial/Pause 2.74
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
387 TestStartStop/group/newest-cni/serial/Stop 1.19
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
389 TestStartStop/group/newest-cni/serial/SecondStart 14.28
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
393 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
394 TestStartStop/group/newest-cni/serial/Pause 2.68
395 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.58

TestDownloadOnly/v1.28.0/json-events (13.22s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-175815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-175815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.215120925s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.22s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0904 04:13:17.163843  389671 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0904 04:13:17.163958  389671 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
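
The check above only asserts that the preload tarball is already in the local cache; the download step (visible in the LogsDuration log below) pins an md5 in the URL's checksum parameter. A minimal sketch for re-verifying the cached tarball by hand, assuming the cache layout shown in this job and a MINIKUBE_HOME pointing at the .minikube directory:

	# expected md5 taken from the download URL's "?checksum=md5:..." parameter
	PRELOAD="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
	echo "2746dfda401436a5341e0500068bf339  $PRELOAD" | md5sum -c -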

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-175815
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-175815: exit status 85 (60.111597ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-175815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-175815 │ jenkins │ v1.36.0 │ 04 Sep 25 04:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 04:13:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 04:13:03.991747  389683 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:13:03.992023  389683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:13:03.992033  389683 out.go:374] Setting ErrFile to fd 2...
	I0904 04:13:03.992040  389683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:13:03.992246  389683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	W0904 04:13:03.992399  389683 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-385918/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-385918/.minikube/config/config.json: no such file or directory
	I0904 04:13:03.993011  389683 out.go:368] Setting JSON to true
	I0904 04:13:03.994001  389683 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6927,"bootTime":1756952257,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 04:13:03.994105  389683 start.go:140] virtualization: kvm guest
	I0904 04:13:03.996365  389683 out.go:99] [download-only-175815] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0904 04:13:03.996532  389683 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 04:13:03.996542  389683 notify.go:220] Checking for updates...
	I0904 04:13:03.997621  389683 out.go:171] MINIKUBE_LOCATION=21409
	I0904 04:13:03.998746  389683 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 04:13:03.999938  389683 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	I0904 04:13:04.001067  389683 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	I0904 04:13:04.002574  389683 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 04:13:04.004525  389683 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 04:13:04.004769  389683 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 04:13:04.027723  389683 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 04:13:04.027852  389683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:13:04.073448  389683 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-04 04:13:04.064123432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:13:04.073572  389683 docker.go:318] overlay module found
	I0904 04:13:04.075082  389683 out.go:99] Using the docker driver based on user configuration
	I0904 04:13:04.075119  389683 start.go:304] selected driver: docker
	I0904 04:13:04.075127  389683 start.go:918] validating driver "docker" against <nil>
	I0904 04:13:04.075218  389683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:13:04.122160  389683 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-04 04:13:04.11265361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:13:04.122340  389683 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 04:13:04.122883  389683 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0904 04:13:04.123027  389683 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 04:13:04.124721  389683 out.go:171] Using Docker driver with root privileges
	I0904 04:13:04.125603  389683 cni.go:84] Creating CNI manager for ""
	I0904 04:13:04.125667  389683 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 04:13:04.125678  389683 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 04:13:04.125754  389683 start.go:348] cluster config:
	{Name:download-only-175815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-175815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 04:13:04.126915  389683 out.go:99] Starting "download-only-175815" primary control-plane node in "download-only-175815" cluster
	I0904 04:13:04.126937  389683 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0904 04:13:04.127908  389683 out.go:99] Pulling base image v0.0.47-1756936034-21409 ...
	I0904 04:13:04.127930  389683 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0904 04:13:04.128044  389683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 04:13:04.143891  389683 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc to local cache
	I0904 04:13:04.144101  389683 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory
	I0904 04:13:04.144207  389683 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc to local cache
	I0904 04:13:04.490339  389683 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0904 04:13:04.490389  389683 cache.go:58] Caching tarball of preloaded images
	I0904 04:13:04.490592  389683 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0904 04:13:04.492588  389683 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0904 04:13:04.492615  389683 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0904 04:13:04.594047  389683 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0904 04:13:12.559707  389683 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc as a tarball
	
	
	* The control-plane node download-only-175815 host does not exist
	  To start a cluster, run: "minikube start -p download-only-175815"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
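
Note that a passing LogsDuration requires `minikube logs` to fail: a download-only profile never creates a host, so the harness treats exit status 85 as the expected outcome. A sketch of the same probe outside the harness, reusing the profile name from this run:

	out/minikube-linux-amd64 logs -p download-only-175815
	echo $?   # the harness above accepted 85, not 0, as the pass condition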

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-175815
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.0/json-events (11.6s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-122384 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-122384 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.598091209s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (11.60s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0904 04:13:29.161671  389671 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0904 04:13:29.161720  389671 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-122384
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-122384: exit status 85 (61.148381ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-175815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-175815 │ jenkins │ v1.36.0 │ 04 Sep 25 04:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.36.0 │ 04 Sep 25 04:13 UTC │ 04 Sep 25 04:13 UTC │
	│ delete  │ -p download-only-175815                                                                                                                                                               │ download-only-175815 │ jenkins │ v1.36.0 │ 04 Sep 25 04:13 UTC │ 04 Sep 25 04:13 UTC │
	│ start   │ -o=json --download-only -p download-only-122384 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-122384 │ jenkins │ v1.36.0 │ 04 Sep 25 04:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 04:13:17
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 04:13:17.605212  390050 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:13:17.605334  390050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:13:17.605343  390050 out.go:374] Setting ErrFile to fd 2...
	I0904 04:13:17.605347  390050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:13:17.605556  390050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:13:17.606144  390050 out.go:368] Setting JSON to true
	I0904 04:13:17.607069  390050 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6941,"bootTime":1756952257,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 04:13:17.607168  390050 start.go:140] virtualization: kvm guest
	I0904 04:13:17.608914  390050 out.go:99] [download-only-122384] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 04:13:17.609027  390050 notify.go:220] Checking for updates...
	I0904 04:13:17.610111  390050 out.go:171] MINIKUBE_LOCATION=21409
	I0904 04:13:17.611412  390050 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 04:13:17.612504  390050 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	I0904 04:13:17.613627  390050 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	I0904 04:13:17.614691  390050 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 04:13:17.616611  390050 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 04:13:17.616849  390050 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 04:13:17.638041  390050 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 04:13:17.638119  390050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:13:17.682861  390050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 04:13:17.67429734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:13:17.683017  390050 docker.go:318] overlay module found
	I0904 04:13:17.684709  390050 out.go:99] Using the docker driver based on user configuration
	I0904 04:13:17.684749  390050 start.go:304] selected driver: docker
	I0904 04:13:17.684759  390050 start.go:918] validating driver "docker" against <nil>
	I0904 04:13:17.684836  390050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:13:17.729543  390050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 04:13:17.720306811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:13:17.729716  390050 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 04:13:17.730211  390050 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0904 04:13:17.730357  390050 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 04:13:17.732229  390050 out.go:171] Using Docker driver with root privileges
	I0904 04:13:17.733331  390050 cni.go:84] Creating CNI manager for ""
	I0904 04:13:17.733439  390050 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0904 04:13:17.733451  390050 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 04:13:17.733530  390050 start.go:348] cluster config:
	{Name:download-only-122384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-122384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 04:13:17.734733  390050 out.go:99] Starting "download-only-122384" primary control-plane node in "download-only-122384" cluster
	I0904 04:13:17.734753  390050 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0904 04:13:17.735814  390050 out.go:99] Pulling base image v0.0.47-1756936034-21409 ...
	I0904 04:13:17.735840  390050 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 04:13:17.735971  390050 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 04:13:17.752588  390050 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc to local cache
	I0904 04:13:17.752712  390050 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory
	I0904 04:13:17.752730  390050 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory, skipping pull
	I0904 04:13:17.752737  390050 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in cache, skipping pull
	I0904 04:13:17.752744  390050 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc as a tarball
	I0904 04:13:18.101265  390050 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0904 04:13:18.101306  390050 cache.go:58] Caching tarball of preloaded images
	I0904 04:13:18.101488  390050 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0904 04:13:18.103164  390050 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0904 04:13:18.103186  390050 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 ...
	I0904 04:13:18.196964  390050 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2b7b36e7513c2e517ecf49b6f3ce02cf -> /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0904 04:13:27.581022  390050 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 ...
	I0904 04:13:27.581147  390050 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21409-385918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-122384 host does not exist
	  To start a cluster, run: "minikube start -p download-only-122384"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

TestDownloadOnly/v1.34.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.20s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-122384
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)
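
DeleteAlwaysSucceeds pins down an idempotence property: `minikube delete -p <profile>` should exit zero even when the profile is already gone. A sketch of the property, with a hypothetical second invocation added for illustration:

	out/minikube-linux-amd64 delete -p download-only-122384   # removes the profile
	out/minikube-linux-amd64 delete -p download-only-122384   # still exits 0 with nothing left to delete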

TestDownloadOnlyKic (1.11s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-642932 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-642932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-642932
--- PASS: TestDownloadOnlyKic (1.11s)

TestBinaryMirror (0.77s)

=== RUN   TestBinaryMirror
I0904 04:13:30.929265  389671 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-703479 --alsologtostderr --binary-mirror http://127.0.0.1:37379 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-703479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-703479
--- PASS: TestBinaryMirror (0.77s)
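
TestBinaryMirror points --binary-mirror at a local HTTP endpoint so the Kubernetes binaries are fetched from it instead of dl.k8s.io. A hand-run sketch, assuming any static file server that mirrors the dl.k8s.io path layout (the server command, directory, and profile name below are illustrative, not what the test itself runs):

	python3 -m http.server 37379 --directory ./k8s-mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
		--binary-mirror http://127.0.0.1:37379 --driver=docker --container-runtime=containerd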

TestOffline (59.29s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-296304 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-296304 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (56.949997554s)
helpers_test.go:175: Cleaning up "offline-containerd-296304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-296304
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-296304: (2.337635097s)
--- PASS: TestOffline (59.29s)
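
TestOffline passes only because everything the start needs is already cached, so nothing has to be fetched at runtime. The flow can be approximated by hand, assuming a cache warmed by an earlier --download-only pass (profile name illustrative):

	out/minikube-linux-amd64 start --download-only -p offline-demo --driver=docker --container-runtime=containerd
	# later, with the network unavailable, the start should come up entirely from cache
	out/minikube-linux-amd64 start -p offline-demo --memory=3072 --wait=true --driver=docker --container-runtime=containerd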

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-919243
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-919243: exit status 85 (52.371902ms)

-- stdout --
	* Profile "addons-919243" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-919243"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-919243
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-919243: exit status 85 (53.637823ms)

-- stdout --
	* Profile "addons-919243" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-919243"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (154.5s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-919243 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-919243 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.499800579s)
--- PASS: TestAddons/Setup (154.50s)
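
Setup enables fifteen addons in a single start invocation via repeated --addons flags. The same addons can also be toggled on the running cluster; a minimal sketch reusing this run's profile:

	out/minikube-linux-amd64 -p addons-919243 addons list                      # shows enabled/disabled state
	out/minikube-linux-amd64 -p addons-919243 addons disable metrics-server
	out/minikube-linux-amd64 -p addons-919243 addons enable metrics-server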

TestAddons/serial/Volcano (70.84s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 11.358366ms
addons_test.go:876: volcano-admission stabilized in 11.388313ms
addons_test.go:868: volcano-scheduler stabilized in 11.473741ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-fqmc7" [bf59d69f-6eb4-4383-8016-ca63f8c8853a] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003747269s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-f58tc" [2a321eb0-54c3-4ab4-931e-8fa899c9dc11] Pending / Ready:ContainersNotReady (containers with unready status: [admission]) / ContainersReady:ContainersNotReady (containers with unready status: [admission])
helpers_test.go:352: "volcano-admission-589c7dd587-f58tc" [2a321eb0-54c3-4ab4-931e-8fa899c9dc11] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 36.003354013s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-qkq8l" [c4918e84-5890-44c0-a800-2b7332fe8408] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003249786s
addons_test.go:903: (dbg) Run:  kubectl --context addons-919243 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-919243 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-919243 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [09a289be-b5ae-4ac3-8d6a-21aac7878919] Pending
helpers_test.go:352: "test-job-nginx-0" [09a289be-b5ae-4ac3-8d6a-21aac7878919] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [09a289be-b5ae-4ac3-8d6a-21aac7878919] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003838697s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-919243 addons disable volcano --alsologtostderr -v=1: (11.474991746s)
--- PASS: TestAddons/serial/Volcano (70.84s)
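
The Volcano checks gate on three components becoming healthy before the test vcjob is submitted. Outside the harness, `kubectl wait` expresses the same readiness conditions; the label selectors and namespace below are taken from the log above:

	kubectl --context addons-919243 -n volcano-system wait pod -l app=volcano-scheduler --for=condition=Ready --timeout=360s
	kubectl --context addons-919243 -n volcano-system wait pod -l app=volcano-admission --for=condition=Ready --timeout=360s
	kubectl --context addons-919243 -n volcano-system wait pod -l app=volcano-controller --for=condition=Ready --timeout=360s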

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-919243 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-919243 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (10.43s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-919243 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-919243 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fe14ae8d-3ce7-46a8-a3f7-b4127befa690] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fe14ae8d-3ce7-46a8-a3f7-b4127befa690] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003227015s
addons_test.go:694: (dbg) Run:  kubectl --context addons-919243 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-919243 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-919243 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.43s)
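
The two printenv calls above check that the gcp-auth webhook injected credentials into a freshly created pod. A minimal sketch of the same check, assuming (as the webhook's behavior suggests) that GOOGLE_APPLICATION_CREDENTIALS points at the mounted fake-credentials file:

    # Both variables should be set in any pod created after the addon came up.
    kubectl --context addons-919243 exec busybox -- /bin/sh -c \
      'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'
    # The mounted credentials file can be read the same way (assumption: the
    # env var points at the mount path).
    kubectl --context addons-919243 exec busybox -- /bin/sh -c \
      'cat "$GOOGLE_APPLICATION_CREDENTIALS"'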

TestAddons/parallel/Registry (15.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.280936ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-rtchm" [5e87baa6-0b25-4007-9b68-11552def10c9] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002204397s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xdlwq" [94871c54-b405-4cd0-8958-e0e3f5287509] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003265356s
addons_test.go:392: (dbg) Run:  kubectl --context addons-919243 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-919243 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-919243 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.227713249s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.03s)
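
The test probes the registry two ways: through the in-cluster Service DNS name from a throwaway pod, and via the node IP that `minikube ip` prints, where registry-proxy publishes port 5000 (the DEBUG GET against 192.168.49.2:5000 later in this log hits that port). A hand-run equivalent, sketched under those assumptions:

    # In-cluster check via the Service DNS name.
    kubectl --context addons-919243 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host, via the node IP and the registry-proxy port.
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-919243 ip):5000/v2/"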

TestAddons/parallel/RegistryCreds (0.63s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.217958ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-919243
addons_test.go:332: (dbg) Run:  kubectl --context addons-919243 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.63s)

TestAddons/parallel/Ingress (21.48s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-919243 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-919243 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-919243 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [240c1c7f-8cef-4edf-b461-95c87483d13a] Pending
helpers_test.go:352: "nginx" [240c1c7f-8cef-4edf-b461-95c87483d13a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [240c1c7f-8cef-4edf-b461-95c87483d13a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003345805s
I0904 04:18:17.775716  389671 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-919243 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-919243 addons disable ingress-dns --alsologtostderr -v=1: (1.343249476s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-919243 addons disable ingress --alsologtostderr -v=1: (7.665317224s)
--- PASS: TestAddons/parallel/Ingress (21.48s)
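
Two independent paths are exercised above: the ingress rule itself (curl inside the node, with a Host header matching the rule in testdata/nginx-ingress-v1.yaml) and ingress-dns (nslookup against the cluster IP). A compact sketch using the names from the log:

    # Hit the ingress from inside the node; the Host header selects the rule.
    out/minikube-linux-amd64 -p addons-919243 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Resolve a test name through the ingress-dns server at the cluster IP.
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-919243 ip)"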

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-89qzc" [2a05038a-c92e-4ead-95a9-76985e13ef3d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003142632s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.26s)

TestAddons/parallel/MetricsServer (5.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.002922ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xxz4b" [5b9b5ace-837f-4cfb-9054-040b38ca81ea] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004239977s
addons_test.go:463: (dbg) Run:  kubectl --context addons-919243 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

TestAddons/parallel/CSI (53.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.258105ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-919243 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-919243 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [8dfc09a9-61ff-4d02-b433-74e98f175099] Pending
2025/09/04 04:17:50 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:352: "task-pv-pod" [8dfc09a9-61ff-4d02-b433-74e98f175099] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [8dfc09a9-61ff-4d02-b433-74e98f175099] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003488808s
addons_test.go:572: (dbg) Run:  kubectl --context addons-919243 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-919243 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-919243 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-919243 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-919243 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-919243 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-919243 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a1b53bfd-a9e3-4f92-b002-5a8f01687429] Pending
helpers_test.go:352: "task-pv-pod-restore" [a1b53bfd-a9e3-4f92-b002-5a8f01687429] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a1b53bfd-a9e3-4f92-b002-5a8f01687429] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003619371s
addons_test.go:614: (dbg) Run:  kubectl --context addons-919243 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-919243 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-919243 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-919243 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.66603574s)
--- PASS: TestAddons/parallel/CSI (53.20s)
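
The long runs of identical `get pvc` calls above are a poll loop on `.status.phase`. The same wait written out as a shell loop (a sketch; note that with csi-hostpath-driver the claim only binds once a consuming pod is scheduled):

    # Poll the PVC phase until it reports Bound, sleeping between attempts.
    until [ "$(kubectl --context addons-919243 get pvc hpvc -n default \
        -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done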

TestAddons/parallel/Headlamp (17.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-919243 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-bs97q" [52336ab5-528e-4952-a7cf-23c2b4cecf54] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-bs97q" [52336ab5-528e-4952-a7cf-23c2b4cecf54] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004062508s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-919243 addons disable headlamp --alsologtostderr -v=1: (5.626640543s)
--- PASS: TestAddons/parallel/Headlamp (17.37s)

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-pst69" [3ca9a6fe-55b2-4ed5-a5a9-25657f9280ac] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003317406s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (52.84s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-919243 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-919243 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-919243 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [7ba60f01-699d-4ea3-8791-6da045c2f5b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [7ba60f01-699d-4ea3-8791-6da045c2f5b0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [7ba60f01-699d-4ea3-8791-6da045c2f5b0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003239331s
addons_test.go:967: (dbg) Run:  kubectl --context addons-919243 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 ssh "cat /opt/local-path-provisioner/pvc-46c6e5a2-40ef-4973-a653-00cfaf5f887e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-919243 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-919243 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-919243 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.992001584s)
--- PASS: TestAddons/parallel/LocalPath (52.84s)
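
The `ssh cat` above reads the file the test pod wrote into the local-path volume. The UUID in the path is the PVC's generated volume name, so the same check can be scripted without hard-coding it (a sketch, assuming local-path-provisioner's `<pv>_<namespace>_<pvc>` directory layout, which matches the path in the log):

    # Resolve the PV name backing the claim, then read the file on the node.
    PV=$(kubectl --context addons-919243 get pvc test-pvc -n default \
        -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-919243 ssh \
      "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"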

TestAddons/parallel/NvidiaDevicePlugin (6.68s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gq7p5" [e8f45d99-a942-43a2-b391-04e7b4abda59] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004222965s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.68s)

TestAddons/parallel/Yakd (10.69s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-g428j" [48310248-f132-4947-816a-70d1abdc45ae] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002907571s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-919243 addons disable yakd --alsologtostderr -v=1: (5.681462287s)
--- PASS: TestAddons/parallel/Yakd (10.69s)

TestAddons/parallel/AmdGpuDevicePlugin (6.46s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
I0904 04:17:36.496977  389671 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-vtkr4" [8e5a1000-0bf7-41a0-b2bc-4dddebfcf214] Running
I0904 04:17:36.500173  389671 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0904 04:17:36.500197  389671 kapi.go:107] duration metric: took 3.247298ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004175489s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-919243 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.46s)

TestAddons/StoppedEnableDisable (12.12s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-919243
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-919243: (11.87571816s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-919243
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-919243
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-919243
--- PASS: TestAddons/StoppedEnableDisable (12.12s)

TestCertOptions (33.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-245759 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-245759 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (25.831185297s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-245759 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-245759 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-245759 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-245759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-245759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-245759: (7.440478514s)
--- PASS: TestCertOptions (33.98s)
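
The openssl and `config view` steps above verify that the extra --apiserver-ips/--apiserver-names ended up as SANs in the apiserver certificate and that the non-default port 8555 reached the kubeconfig. A narrower sketch of both checks (run before the profile is deleted):

    # SANs should list 192.168.15.15, localhost and www.google.com.
    out/minikube-linux-amd64 -p cert-options-245759 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" |
      grep -A1 'Subject Alternative Name'
    # The kubeconfig server URL should end in :8555.
    kubectl --context cert-options-245759 config view -o \
      jsonpath='{.clusters[?(@.name=="cert-options-245759")].cluster.server}'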

TestCertExpiration (216.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-770779 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0904 04:43:06.912960  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-770779 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (28.149866486s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-770779 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-770779 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.840043383s)
helpers_test.go:175: Cleaning up "cert-expiration-770779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-770779
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-770779: (2.286499266s)
--- PASS: TestCertExpiration (216.28s)
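
The test starts a cluster whose certs expire in 3m, then restarts it with --cert-expiration=8760h to force regeneration. The expiry itself can be inspected directly while the profile exists (a sketch; cert path assumed to match the one used by TestCertOptions above):

    # notAfter should be roughly three minutes after the first start.
    out/minikube-linux-amd64 -p cert-expiration-770779 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"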

TestForceSystemdFlag (30.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-823299 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-823299 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.40430805s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-823299 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-823299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-823299
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-823299: (2.328147499s)
--- PASS: TestForceSystemdFlag (30.13s)
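
The `cat /etc/containerd/config.toml` step checks that --force-systemd switched containerd's runc runtime to the systemd cgroup driver. A narrower sketch of the same assertion (assumes containerd's standard SystemdCgroup runc option):

    # Expect: SystemdCgroup = true
    out/minikube-linux-amd64 -p force-systemd-flag-823299 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"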

TestForceSystemdEnv (35.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-396799 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-396799 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.113968851s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-396799 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-396799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-396799
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-396799: (2.327056522s)
--- PASS: TestForceSystemdEnv (35.73s)

TestKVMDriverInstallOrUpdate (1.4s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0904 04:42:54.207683  389671 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 04:42:54.207838  389671 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0904 04:42:54.241513  389671 install.go:62] docker-machine-driver-kvm2: exit status 1
W0904 04:42:54.241652  389671 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 04:42:54.241701  389671 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4018303861/001/docker-machine-driver-kvm2
I0904 04:42:54.512543  389671 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4018303861/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00056c3c0 gz:0xc00056c3c8 tar:0xc00056c350 tar.bz2:0xc00056c370 tar.gz:0xc00056c380 tar.xz:0xc00056c3a0 tar.zst:0xc00056c3b0 tbz2:0xc00056c370 tgz:0xc00056c380 txz:0xc00056c3a0 tzst:0xc00056c3b0 xz:0xc00056c3d0 zip:0xc00056c3e0 zst:0xc00056c3d8] Getters:map[file:0xc000b77d20 http:0xc0017cac30 https:0xc0017cac80] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 04:42:54.512600  389671 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4018303861/001/docker-machine-driver-kvm2
I0904 04:42:55.165769  389671 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 04:42:55.165911  389671 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0904 04:42:55.204669  389671 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0904 04:42:55.204716  389671 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0904 04:42:55.204778  389671 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 04:42:55.204839  389671 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4018303861/002/docker-machine-driver-kvm2
I0904 04:42:55.230895  389671 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4018303861/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00056c3c0 gz:0xc00056c3c8 tar:0xc00056c350 tar.bz2:0xc00056c370 tar.gz:0xc00056c380 tar.xz:0xc00056c3a0 tar.zst:0xc00056c3b0 tbz2:0xc00056c370 tgz:0xc00056c380 txz:0xc00056c3a0 tzst:0xc00056c3b0 xz:0xc00056c3d0 zip:0xc00056c3e0 zst:0xc00056c3d8] Getters:map[file:0xc0008f3710 http:0xc0017cadc0 https:0xc0017cae10] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 04:42:55.230957  389671 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4018303861/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.40s)
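
The download.go lines above show the installer's fallback: it first fetches the arch-suffixed binary with its checksum file, and when that checksum fetch 404s it retries the un-suffixed "common" artifact name. A simplified shell rendering of that fallback (the real code goes through go-getter with checksum verification; URLs copied from the log, v1.3.0 being the version the test pins):

    BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    # Try the arch-specific name first; -f makes curl fail on HTTP 404.
    if ! curl -fLo docker-machine-driver-kvm2 \
        "$BASE/docker-machine-driver-kvm2-amd64"; then
      # Fall back to the common, un-suffixed artifact name.
      curl -fLo docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2"
    fi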

TestErrorSpam/setup (21.87s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-332741 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-332741 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-332741 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-332741 --driver=docker  --container-runtime=containerd: (21.869461966s)
--- PASS: TestErrorSpam/setup (21.87s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (2.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 stop: (2.228419912s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-332741 --log_dir /tmp/nospam-332741 stop
--- PASS: TestErrorSpam/stop (2.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-385918/.minikube/files/etc/test/nested/copy/389671/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.52s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874981 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-874981 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (40.515400357s)
--- PASS: TestFunctional/serial/StartWithProxy (40.52s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.63s)

=== RUN   TestFunctional/serial/SoftStart
I0904 04:20:42.810126  389671 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874981 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-874981 --alsologtostderr -v=8: (5.632819357s)
functional_test.go:678: soft start took 5.633569472s for "functional-874981" cluster.
I0904 04:20:48.443372  389671 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (5.63s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-874981 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-874981 cache add registry.k8s.io/pause:3.3: (1.000391468s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

TestFunctional/serial/CacheCmd/cache/add_local (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-874981 /tmp/TestFunctionalserialCacheCmdcacheadd_local2409271517/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cache add minikube-local-cache-test:functional-874981
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-874981 cache add minikube-local-cache-test:functional-874981: (1.560382269s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cache delete minikube-local-cache-test:functional-874981
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-874981
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.86s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (259.835147ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
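
The sequence above is the whole cache_reload contract: delete the image inside the node, observe that crictl no longer finds it (the expected non-zero exit), then let `cache reload` push it back from the host-side cache. Condensed into its three commands:

    # Remove the image in the node, then restore it from minikube's cache.
    out/minikube-linux-amd64 -p functional-874981 ssh \
      sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-874981 cache reload
    # This inspect should now succeed again.
    out/minikube-linux-amd64 -p functional-874981 ssh \
      sudo crictl inspecti registry.k8s.io/pause:latest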

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 kubectl -- --context functional-874981 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-874981 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (40.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874981 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0904 04:21:06.260496  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:06.266884  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:06.278216  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:06.299582  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:06.340979  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:06.422385  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:06.583892  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:06.905564  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:07.547566  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:08.829259  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:11.391020  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:16.512329  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:21:26.754382  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-874981 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.977265367s)
functional_test.go:776: restart took 40.977389713s for "functional-874981" cluster.
I0904 04:21:36.269895  389671 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (40.98s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-874981 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-874981 logs: (1.306662453s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.32s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 logs --file /tmp/TestFunctionalserialLogsFileCmd4054266235/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-874981 logs --file /tmp/TestFunctionalserialLogsFileCmd4054266235/001/logs.txt: (1.322212536s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)
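
For reference, the two invocations above map onto this CLI usage; a minimal sketch, assuming the built binary is invoked as plain `minikube` and reusing the profile name from this run:

	# Print cluster logs to stdout
	minikube -p functional-874981 logs
	# Write the same logs to a file instead (the test uses a per-run temp dir)
	minikube -p functional-874981 logs --file /tmp/logs.txt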

TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-874981 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-874981
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-874981: exit status 115 (313.771755ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30283 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-874981 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
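
A minimal sketch of the flow this test exercises, assuming `testdata/invalidsvc.yaml` defines a Service whose selector matches no running pod; `minikube service` is expected to exit with status 115 (SVC_UNREACHABLE), as seen above:

	kubectl --context functional-874981 apply -f testdata/invalidsvc.yaml
	# Expected failure: no running pod backs the service
	minikube service invalid-svc -p functional-874981 || echo "exit=$?"
	kubectl --context functional-874981 delete -f testdata/invalidsvc.yaml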

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 config get cpus: exit status 14 (79.535611ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 config get cpus: exit status 14 (74.731652ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
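
The subtest depends on `config get` exiting with status 14 when the key is unset; a minimal sketch of the same set/get/unset round trip (profile name from this run):

	minikube -p functional-874981 config unset cpus
	minikube -p functional-874981 config get cpus    # exit status 14: key not in config
	minikube -p functional-874981 config set cpus 2
	minikube -p functional-874981 config get cpus    # prints 2, exit status 0
	minikube -p functional-874981 config unset cpus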

TestFunctional/parallel/DashboardCmd (13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-874981 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-874981 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 440138: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.00s)

TestFunctional/parallel/DryRun (0.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-874981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (167.462582ms)

-- stdout --
	* [functional-874981] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0904 04:22:06.819182  439531 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:22:06.819436  439531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:22:06.819446  439531 out.go:374] Setting ErrFile to fd 2...
	I0904 04:22:06.819451  439531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:22:06.819619  439531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:22:06.820244  439531 out.go:368] Setting JSON to false
	I0904 04:22:06.821561  439531 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7470,"bootTime":1756952257,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 04:22:06.821640  439531 start.go:140] virtualization: kvm guest
	I0904 04:22:06.824143  439531 out.go:179] * [functional-874981] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 04:22:06.825281  439531 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 04:22:06.825310  439531 notify.go:220] Checking for updates...
	I0904 04:22:06.827365  439531 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 04:22:06.828630  439531 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	I0904 04:22:06.830032  439531 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	I0904 04:22:06.831348  439531 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 04:22:06.832504  439531 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 04:22:06.834151  439531 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:22:06.834744  439531 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 04:22:06.861946  439531 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 04:22:06.862055  439531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:22:06.925902  439531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-04 04:22:06.915493551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:22:06.926041  439531 docker.go:318] overlay module found
	I0904 04:22:06.927756  439531 out.go:179] * Using the docker driver based on existing profile
	I0904 04:22:06.928833  439531 start.go:304] selected driver: docker
	I0904 04:22:06.928848  439531 start.go:918] validating driver "docker" against &{Name:functional-874981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-874981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 04:22:06.928970  439531 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 04:22:06.930932  439531 out.go:203] 
	W0904 04:22:06.931938  439531 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 04:22:06.932928  439531 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874981 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.37s)
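
`--dry-run` runs flag and driver validation without touching the cluster, so the 250MB request trips the 1800MB minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch of both outcomes:

	# Fails validation: 250MB is below the 1800MB usable minimum
	minikube start -p functional-874981 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	# Passes validation without starting anything
	minikube start -p functional-874981 --dry-run --driver=docker --container-runtime=containerd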

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-874981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-874981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (158.874642ms)

-- stdout --
	* [functional-874981] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0904 04:22:07.194142  439718 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:22:07.194444  439718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:22:07.194487  439718 out.go:374] Setting ErrFile to fd 2...
	I0904 04:22:07.194502  439718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:22:07.194912  439718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:22:07.195585  439718 out.go:368] Setting JSON to false
	I0904 04:22:07.196755  439718 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7470,"bootTime":1756952257,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 04:22:07.196859  439718 start.go:140] virtualization: kvm guest
	I0904 04:22:07.198859  439718 out.go:179] * [functional-874981] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0904 04:22:07.200480  439718 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 04:22:07.200511  439718 notify.go:220] Checking for updates...
	I0904 04:22:07.202496  439718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 04:22:07.204144  439718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	I0904 04:22:07.205253  439718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	I0904 04:22:07.206873  439718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 04:22:07.208027  439718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 04:22:07.209747  439718 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:22:07.210495  439718 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 04:22:07.238351  439718 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 04:22:07.238467  439718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:22:07.290925  439718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-04 04:22:07.281159829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:22:07.291031  439718 docker.go:318] overlay module found
	I0904 04:22:07.292862  439718 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0904 04:22:07.293945  439718 start.go:304] selected driver: docker
	I0904 04:22:07.293961  439718 start.go:918] validating driver "docker" against &{Name:functional-874981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-874981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 04:22:07.294058  439718 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 04:22:07.296021  439718 out.go:203] 
	W0904 04:22:07.297046  439718 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 04:22:07.297957  439718 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

TestFunctional/parallel/ServiceCmdConnect (11.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-874981 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-874981 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-xcp97" [338484e0-7cc8-4344-9a68-1ca5bae09f7c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-xcp97" [338484e0-7cc8-4344-9a68-1ca5bae09f7c] Running
I0904 04:21:50.424009  389671 detect.go:223] nested VM detected
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003392788s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30377
functional_test.go:1680: http://192.168.49.2:30377: success! body:
Request served by hello-node-connect-7d85dfc575-xcp97

HTTP/1.1 GET /

Host: 192.168.49.2:30377
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.66s)
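
The connect test is the usual deploy/expose/resolve loop; a minimal sketch (the NodePort, 30377 in this run, varies per run):

	kubectl --context functional-874981 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-874981 expose deployment hello-node-connect --type=NodePort --port=8080
	# Once the pod is Running, resolve the NodePort URL and probe it
	URL=$(minikube -p functional-874981 service hello-node-connect --url)
	curl -s "$URL"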

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (30.55s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2a32d882-1267-468f-b018-f50826a3a487] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003155571s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-874981 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-874981 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-874981 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-874981 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [acd9cfaa-7d2a-4a73-8350-da459ec328e5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [acd9cfaa-7d2a-4a73-8350-da459ec328e5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003122324s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-874981 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-874981 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-874981 delete -f testdata/storage-provisioner/pod.yaml: (2.764163803s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-874981 apply -f testdata/storage-provisioner/pod.yaml
I0904 04:22:05.492038  389671 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [53922800-161f-4c27-936f-11b4e2e15777] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [53922800-161f-4c27-936f-11b4e2e15777] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003401301s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-874981 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.55s)
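
The PVC test checks that data written through the claim outlives the pod; a minimal sketch using the same fixtures:

	kubectl --context functional-874981 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-874981 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-874981 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod; the file must still be present on the claim
	kubectl --context functional-874981 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-874981 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-874981 exec sp-pod -- ls /tmp/mount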

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh -n functional-874981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cp functional-874981:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd180657207/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh -n functional-874981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh -n functional-874981 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)
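
`minikube cp` copies in both directions and creates missing target directories; a minimal sketch of the three transfers above (the /tmp destination on the host is an arbitrary choice):

	# host -> node
	minikube -p functional-874981 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node -> host
	minikube -p functional-874981 cp functional-874981:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> node, into a directory that does not yet exist
	minikube -p functional-874981 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt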

TestFunctional/parallel/MySQL (21.73s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-874981 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-g87d7" [aaa6a6de-fac4-445e-a54c-4b1237e8da18] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-g87d7" [aaa6a6de-fac4-445e-a54c-4b1237e8da18] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.007653463s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-874981 exec mysql-5bb876957f-g87d7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-874981 exec mysql-5bb876957f-g87d7 -- mysql -ppassword -e "show databases;": exit status 1 (113.798302ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0904 04:22:12.720526  389671 retry.go:31] will retry after 1.379712495s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-874981 exec mysql-5bb876957f-g87d7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-874981 exec mysql-5bb876957f-g87d7 -- mysql -ppassword -e "show databases;": exit status 1 (110.076551ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0904 04:22:14.210689  389671 retry.go:31] will retry after 1.765279646s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-874981 exec mysql-5bb876957f-g87d7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-874981 exec mysql-5bb876957f-g87d7 -- mysql -ppassword -e "show databases;": exit status 1 (145.831375ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-874981 exec mysql-5bb876957f-g87d7 -- mysql -ppassword -e "show databases;"
2025/09/04 04:22:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (21.73s)
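
The access-denied and socket errors above are the normal mysqld warm-up window, so the test retries with backoff until the query succeeds. A minimal sketch of an equivalent polling loop, targeting the deployment rather than the generated pod name:

	kubectl --context functional-874981 replace --force -f testdata/mysql.yaml
	# Poll until mysqld is up and accepts the query
	until kubectl --context functional-874981 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
		sleep 2
	done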

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/389671/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo cat /etc/test/nested/copy/389671/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.79s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/389671.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo cat /etc/ssl/certs/389671.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/389671.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo cat /usr/share/ca-certificates/389671.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3896712.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo cat /etc/ssl/certs/3896712.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3896712.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo cat /usr/share/ca-certificates/3896712.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.79s)
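
CertSync verifies that host-side certificates (named after the test process PID, 389671 here) were synced into the node at both standard locations plus their hash-named links; a minimal sketch of the probes:

	minikube -p functional-874981 ssh "sudo cat /etc/ssl/certs/389671.pem"
	minikube -p functional-874981 ssh "sudo cat /usr/share/ca-certificates/389671.pem"
	minikube -p functional-874981 ssh "sudo cat /etc/ssl/certs/51391683.0"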

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-874981 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh "sudo systemctl is-active docker": exit status 1 (273.952663ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh "sudo systemctl is-active crio": exit status 1 (264.83304ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
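
With containerd selected as the runtime, the docker and crio units must be inactive; `systemctl is-active` exits non-zero for an inactive unit, which surfaces through ssh as the exit status 3 / "inactive" pairs above. A minimal sketch:

	for unit in docker crio; do
		minikube -p functional-874981 ssh "sudo systemctl is-active $unit" && echo "$unit unexpectedly active"
	done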

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-874981 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-874981 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-g9kxf" [3d09d7d7-9cb0-4c00-87d6-e72ef6e231cb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-g9kxf" [3d09d7d7-9cb0-4c00-87d6-e72ef6e231cb] Running
E0904 04:21:47.236686  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003456291s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-874981 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-874981 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-874981 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-874981 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 435138: os: process already finished
helpers_test.go:519: unable to terminate pid 434788: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-874981 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-874981 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c4de5f2c-849a-45fa-ba31-237e9aa2e19f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c4de5f2c-849a-45fa-ba31-237e9aa2e19f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003253019s
I0904 04:21:55.903110  389671 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)
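
`minikube tunnel` runs in the foreground and assigns ingress IPs to LoadBalancer services, so the test launches it as a background daemon before deploying nginx-svc; a minimal sketch:

	minikube -p functional-874981 tunnel &
	TUNNEL_PID=$!
	kubectl --context functional-874981 apply -f testdata/testsvc.yaml
	kubectl --context functional-874981 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m
	# later, on teardown: kill "$TUNNEL_PID"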

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 service list -o json
functional_test.go:1504: Took "471.452329ms" to run "out/minikube-linux-amd64 -p functional-874981 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32316
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32316
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "325.673048ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.013089ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "306.717126ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.147286ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-874981 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.82.199 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
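The two checks above first read the tunnel-assigned LoadBalancer IP via jsonpath, then fetch it directly. A small Go sketch of the same pair of steps, assuming kubectl on PATH and that `minikube tunnel` is still running; the 5s HTTP timeout is an assumption:

// tunnel_probe.go: read the tunnel-assigned ingress IP, then GET it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-874981",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://" + ip) // reachable only while the tunnel runs
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("tunnel at http://%s is working (%d bytes)\n", ip, len(body))
}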

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-874981 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (16.8s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdany-port2188691/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1756959716065983030" to /tmp/TestFunctionalparallelMountCmdany-port2188691/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1756959716065983030" to /tmp/TestFunctionalparallelMountCmdany-port2188691/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1756959716065983030" to /tmp/TestFunctionalparallelMountCmdany-port2188691/001/test-1756959716065983030
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.743166ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0904 04:21:56.346032  389671 retry.go:31] will retry after 572.042881ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 04:21 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 04:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 04:21 test-1756959716065983030
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh cat /mount-9p/test-1756959716065983030
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-874981 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [57039667-7906-4bc1-ab10-5f6380fa096d] Pending
helpers_test.go:352: "busybox-mount" [57039667-7906-4bc1-ab10-5f6380fa096d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [57039667-7906-4bc1-ab10-5f6380fa096d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [57039667-7906-4bc1-ab10-5f6380fa096d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.004367545s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-874981 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdany-port2188691/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.80s)
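The findmnt probe above fails once before the 9p mount appears, and the harness retries with a randomized backoff (retry.go:31). A minimal Go sketch of that verification loop, using the binary path and profile name from this log; the fixed 500ms backoff and 10-try cap are assumptions:

// mount_probe.go: retry findmnt over `minikube ssh` until the 9p
// mount shows up, as the harness does after starting the mount daemon.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-874981", "ssh",
			"findmnt -T /mount-9p | grep 9p").Output()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // the harness randomizes this backoff
	}
	fmt.Println("mount never appeared")
}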

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
I0904 04:22:16.123038  389671 retry.go:31] will retry after 1.939164644s: exit status 1
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.93s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874981 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-874981
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-874981
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874981 image ls --format short --alsologtostderr:
I0904 04:22:16.337292  442324 out.go:360] Setting OutFile to fd 1 ...
I0904 04:22:16.337669  442324 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:16.337720  442324 out.go:374] Setting ErrFile to fd 2...
I0904 04:22:16.337739  442324 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:16.338061  442324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
I0904 04:22:16.338944  442324 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:16.339138  442324 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:16.339736  442324 cli_runner.go:164] Run: docker container inspect functional-874981 --format={{.State.Status}}
I0904 04:22:16.364613  442324 ssh_runner.go:195] Run: systemctl --version
I0904 04:22:16.364686  442324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874981
I0904 04:22:16.384055  442324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/functional-874981/id_rsa Username:docker}
I0904 04:22:16.483488  442324 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
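The stderr trace shows how `image ls` works on the containerd runtime: minikube SSHes into the node, runs `sudo crictl images --output json`, and renders the requested format from that JSON. A sketch of the rendering step, assuming crictl's JSON shape (an `images` array whose entries carry `repoTags`):

// crictl_list.go: turn crictl's image JSON into the short repo:tag
// listing shown above. Reads the JSON from stdin.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the field needed for the short format; crictl's JSON carries
// more (id, repoDigests, size, ...).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}

It could be fed with something like: out/minikube-linux-amd64 -p functional-874981 ssh 'sudo crictl images --output json' | go run crictl_list.go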

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874981 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:90550c │ 27.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:a0af72 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:46169d │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/library/minikube-local-cache-test │ functional-874981  │ sha256:47ed9f │ 991B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:df0860 │ 26MB   │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/kicbase/echo-server               │ functional-874981  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/nginx                     │ latest             │ sha256:ad5708 │ 72.3MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:4a8601 │ 22.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874981 image ls --format table --alsologtostderr:
I0904 04:22:17.157114  442610 out.go:360] Setting OutFile to fd 1 ...
I0904 04:22:17.157227  442610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:17.157234  442610 out.go:374] Setting ErrFile to fd 2...
I0904 04:22:17.157240  442610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:17.157538  442610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
I0904 04:22:17.158228  442610 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:17.158339  442610 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:17.158706  442610 cli_runner.go:164] Run: docker container inspect functional-874981 --format={{.State.Status}}
I0904 04:22:17.177056  442610 ssh_runner.go:195] Run: systemctl --version
I0904 04:22:17.177102  442610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874981
I0904 04:22:17.196594  442610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/functional-874981/id_rsa Username:docker}
I0904 04:22:17.288244  442610 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874981 image ls --format json --alsologtostderr:
[{"id":"sha256:47ed9f424ad19a16371db94641bb372541c0bfc7c5e013c7b613047cdbf1a618","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-874981"],"size":"991"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083
ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d
31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"27066504"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-874981"],"size":"2372971"},{"id":"sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22
477192"},{"id":"sha256:ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"72324501"},{"id":"sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"22819719"},{"id":"sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"17385558"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5f1f5298c888daa46c4
409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"25963701"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874981 image ls --format json --alsologtostderr:
I0904 04:22:17.052033  442563 out.go:360] Setting OutFile to fd 1 ...
I0904 04:22:17.052517  442563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:17.052575  442563 out.go:374] Setting ErrFile to fd 2...
I0904 04:22:17.052594  442563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:17.053089  442563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
I0904 04:22:17.054099  442563 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:17.054232  442563 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:17.054625  442563 cli_runner.go:164] Run: docker container inspect functional-874981 --format={{.State.Status}}
I0904 04:22:17.071561  442563 ssh_runner.go:195] Run: systemctl --version
I0904 04:22:17.071618  442563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874981
I0904 04:22:17.090270  442563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/functional-874981/id_rsa Username:docker}
I0904 04:22:17.188568  442563 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-874981 image ls --format yaml --alsologtostderr:
- id: sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "27066504"
- id: sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "22819719"
- id: sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "25963701"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22477192"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-874981
size: "2372971"
- id: sha256:47ed9f424ad19a16371db94641bb372541c0bfc7c5e013c7b613047cdbf1a618
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-874981
size: "991"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "17385558"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "72324501"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874981 image ls --format yaml --alsologtostderr:
I0904 04:22:16.662085  442444 out.go:360] Setting OutFile to fd 1 ...
I0904 04:22:16.662221  442444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:16.662231  442444 out.go:374] Setting ErrFile to fd 2...
I0904 04:22:16.662236  442444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:16.662440  442444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
I0904 04:22:16.663069  442444 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:16.663182  442444 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:16.663547  442444 cli_runner.go:164] Run: docker container inspect functional-874981 --format={{.State.Status}}
I0904 04:22:16.680545  442444 ssh_runner.go:195] Run: systemctl --version
I0904 04:22:16.680595  442444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874981
I0904 04:22:16.704520  442444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/functional-874981/id_rsa Username:docker}
I0904 04:22:16.884029  442444 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh pgrep buildkitd: exit status 1 (263.278454ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image build -t localhost/my-image:functional-874981 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-874981 image build -t localhost/my-image:functional-874981 testdata/build --alsologtostderr: (3.497943808s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-874981 image build -t localhost/my-image:functional-874981 testdata/build --alsologtostderr:
I0904 04:22:17.547193  442761 out.go:360] Setting OutFile to fd 1 ...
I0904 04:22:17.547482  442761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:17.547494  442761 out.go:374] Setting ErrFile to fd 2...
I0904 04:22:17.547501  442761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 04:22:17.547745  442761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
I0904 04:22:17.548483  442761 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:17.549285  442761 config.go:182] Loaded profile config "functional-874981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0904 04:22:17.549764  442761 cli_runner.go:164] Run: docker container inspect functional-874981 --format={{.State.Status}}
I0904 04:22:17.570684  442761 ssh_runner.go:195] Run: systemctl --version
I0904 04:22:17.570754  442761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-874981
I0904 04:22:17.591493  442761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/functional-874981/id_rsa Username:docker}
I0904 04:22:17.683355  442761 build_images.go:161] Building image from path: /tmp/build.371325101.tar
I0904 04:22:17.683429  442761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 04:22:17.692274  442761 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.371325101.tar
I0904 04:22:17.695386  442761 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.371325101.tar: stat -c "%s %y" /var/lib/minikube/build/build.371325101.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.371325101.tar': No such file or directory
I0904 04:22:17.695424  442761 ssh_runner.go:362] scp /tmp/build.371325101.tar --> /var/lib/minikube/build/build.371325101.tar (3072 bytes)
I0904 04:22:17.717348  442761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.371325101
I0904 04:22:17.725042  442761 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.371325101 -xf /var/lib/minikube/build/build.371325101.tar
I0904 04:22:17.733914  442761 containerd.go:394] Building image: /var/lib/minikube/build/build.371325101
I0904 04:22:17.733977  442761 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.371325101 --local dockerfile=/var/lib/minikube/build/build.371325101 --output type=image,name=localhost/my-image:functional-874981
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0a69a79b4994f474b7d8aa5584603f225cf3e033b1cab2e195b1887dd5183e44 done
#8 exporting config sha256:2995097c4d2be3ef62c5220f4303f7ba2f8f338683775fdcbcf5521ca05de14f done
#8 naming to localhost/my-image:functional-874981 done
#8 DONE 0.1s
I0904 04:22:20.975742  442761 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.371325101 --local dockerfile=/var/lib/minikube/build/build.371325101 --output type=image,name=localhost/my-image:functional-874981: (3.24172309s)
I0904 04:22:20.975827  442761 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.371325101
I0904 04:22:20.984516  442761 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.371325101.tar
I0904 04:22:20.992339  442761 build_images.go:217] Built localhost/my-image:functional-874981 from /tmp/build.371325101.tar
I0904 04:22:20.992368  442761 build_images.go:133] succeeded building to: functional-874981
I0904 04:22:20.992374  442761 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
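The stderr above spells out the containerd build path: the context is tarred locally, copied to /var/lib/minikube/build, unpacked, built with buildctl's dockerfile.v0 frontend, and then cleaned up. A sketch of the on-node steps, assuming they are driven through `minikube ssh` (the harness uses its internal ssh_runner) and that the tar has already been copied to the node:

// containerd_build.go: replay the on-node build steps from the trace.
package main

import (
	"fmt"
	"os/exec"
)

func ssh(step string) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-874981", "ssh", step).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("step %q failed: %v\n%s", step, err, out))
	}
}

func main() {
	const dir = "/var/lib/minikube/build/build.371325101"
	for _, step := range []string{
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf " + dir + ".tar",
		"sudo buildctl build --frontend dockerfile.v0" +
			" --local context=" + dir +
			" --local dockerfile=" + dir +
			" --output type=image,name=localhost/my-image:functional-874981",
		"sudo rm -rf " + dir,          // cleanup, as in the trace
		"sudo rm -f " + dir + ".tar",
	} {
		ssh(step)
	}
}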

TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.688196951s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-874981
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image load --daemon kicbase/echo-server:functional-874981 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-874981 image load --daemon kicbase/echo-server:functional-874981 --alsologtostderr: (1.049937546s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image load --daemon kicbase/echo-server:functional-874981 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-874981
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image load --daemon kicbase/echo-server:functional-874981 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-874981 image load --daemon kicbase/echo-server:functional-874981 --alsologtostderr: (1.886050855s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.99s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image save kicbase/echo-server:functional-874981 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image rm kicbase/echo-server:functional-874981 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-874981
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 image save --daemon kicbase/echo-server:functional-874981 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-874981
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
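Taken together, the last few tests exercise a save, remove, load round trip for an image. A compact Go sketch of the same sequence via the CLI, using the tar path and image name from this log; running from the workspace root is assumed:

// image_roundtrip.go: save the image to a tar, remove it from the
// runtime, load it back, and list images to confirm it returned.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) *exec.Cmd {
	base := []string{"-p", "functional-874981"}
	return exec.Command("out/minikube-linux-amd64", append(base, args...)...)
}

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar"
	for _, cmd := range []*exec.Cmd{
		mk("image", "save", "kicbase/echo-server:functional-874981", tar),
		mk("image", "rm", "kicbase/echo-server:functional-874981"),
		mk("image", "load", tar),
		mk("image", "ls"),
	} {
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println("round trip completed")
}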

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdspecific-port678836026/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.382938ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0904 04:22:13.135659  389671 retry.go:31] will retry after 557.826883ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdspecific-port678836026/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh "sudo umount -f /mount-9p": exit status 1 (261.425217ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-874981 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdspecific-port678836026/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4042710784/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4042710784/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4042710784/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T" /mount1: exit status 1 (336.423849ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0904 04:22:15.002736  389671 retry.go:31] will retry after 301.716032ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-874981 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-874981 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4042710784/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4042710784/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-874981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4042710784/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
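The cleanup check above relies on `minikube mount --kill=true` terminating every mount process for the profile. A short Go sketch of the same verification, reusing the mount points from this log; treating any findmnt success as a leftover mount is the assumption:

// mount_cleanup.go: kill all mount processes for the profile, then
// confirm none of the three mount points still answers findmnt.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same kill switch the test invokes.
	_ = exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-874981", "--kill=true").Run()
	for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-874981",
			"ssh", "findmnt -T "+mp).Run()
		fmt.Printf("%s still mounted: %v\n", mp, err == nil)
	}
}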

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-874981
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-874981
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-874981
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (94.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0904 04:22:28.199042  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:23:50.121215  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m33.550532424s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (94.22s)

TestMultiControlPlane/serial/DeployApp (17.12s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 kubectl -- rollout status deployment/busybox: (15.139669218s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-8gc46 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-fmtxm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-nthk7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-8gc46 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-fmtxm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-nthk7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-8gc46 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-fmtxm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-nthk7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (17.12s)
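The block above fans out: it lists the busybox pods, then resolves three names (kubernetes.io, kubernetes.default, and the fully qualified service name) from inside each pod. A Go sketch of that loop, assuming, as in this run, that only the busybox pods live in the default namespace of this context:

// dns_fanout.go: list the deployment's pods, then resolve each name
// from inside every pod, mirroring the checks logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-895701",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	hosts := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			// Any resolution failure is surfaced; the test treats it as fatal.
			if err := exec.Command("kubectl", "--context", "ha-895701",
				"exec", pod, "--", "nslookup", host).Run(); err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n", pod, host, err)
			}
		}
	}
}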

TestMultiControlPlane/serial/PingHostFromPods (1.04s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-8gc46 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-8gc46 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-fmtxm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-fmtxm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-nthk7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 kubectl -- exec busybox-7b57f96db7-nthk7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)
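
The shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, extracts the resolved address from busybox nslookup output (the third space-separated field of line 5), which is then pinged as the host gateway 192.168.49.1. Below is a standalone rendering of that parse; the sample output is an illustrative guess at busybox's format, not captured from this run.

// Sketch: pull the IP that busybox nslookup prints for
// host.minikube.internal, i.e. field 3 of output line 5.
package main

import (
	"fmt"
	"strings"
)

func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5, then cut -d' ' -f3
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox-style output; the real test runs this inside the pod.
	sample := strings.Join([]string{
		"Server:    10.96.0.10",
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
		"",
		"Name:      host.minikube.internal",
		"Address 1: 192.168.49.1 host.minikube.internal",
	}, "\n")
	fmt.Println(hostIP(sample)) // prints 192.168.49.1
}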

TestMultiControlPlane/serial/AddWorkerNode (12.25s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 node add --alsologtostderr -v 5: (11.390893945s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (12.25s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-895701 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (15.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp testdata/cp-test.txt ha-895701:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1565031524/001/cp-test_ha-895701.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701:/home/docker/cp-test.txt ha-895701-m02:/home/docker/cp-test_ha-895701_ha-895701-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test_ha-895701_ha-895701-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701:/home/docker/cp-test.txt ha-895701-m03:/home/docker/cp-test_ha-895701_ha-895701-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test_ha-895701_ha-895701-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701:/home/docker/cp-test.txt ha-895701-m04:/home/docker/cp-test_ha-895701_ha-895701-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test_ha-895701_ha-895701-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp testdata/cp-test.txt ha-895701-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1565031524/001/cp-test_ha-895701-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m02:/home/docker/cp-test.txt ha-895701:/home/docker/cp-test_ha-895701-m02_ha-895701.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test_ha-895701-m02_ha-895701.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m02:/home/docker/cp-test.txt ha-895701-m03:/home/docker/cp-test_ha-895701-m02_ha-895701-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test_ha-895701-m02_ha-895701-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m02:/home/docker/cp-test.txt ha-895701-m04:/home/docker/cp-test_ha-895701-m02_ha-895701-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test_ha-895701-m02_ha-895701-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp testdata/cp-test.txt ha-895701-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1565031524/001/cp-test_ha-895701-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m03:/home/docker/cp-test.txt ha-895701:/home/docker/cp-test_ha-895701-m03_ha-895701.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test_ha-895701-m03_ha-895701.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m03:/home/docker/cp-test.txt ha-895701-m02:/home/docker/cp-test_ha-895701-m03_ha-895701-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test_ha-895701-m03_ha-895701-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m03:/home/docker/cp-test.txt ha-895701-m04:/home/docker/cp-test_ha-895701-m03_ha-895701-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test_ha-895701-m03_ha-895701-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp testdata/cp-test.txt ha-895701-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1565031524/001/cp-test_ha-895701-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m04:/home/docker/cp-test.txt ha-895701:/home/docker/cp-test_ha-895701-m04_ha-895701.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701 "sudo cat /home/docker/cp-test_ha-895701-m04_ha-895701.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m04:/home/docker/cp-test.txt ha-895701-m02:/home/docker/cp-test_ha-895701-m04_ha-895701-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m02 "sudo cat /home/docker/cp-test_ha-895701-m04_ha-895701-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 cp ha-895701-m04:/home/docker/cp-test.txt ha-895701-m03:/home/docker/cp-test_ha-895701-m04_ha-895701-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 ssh -n ha-895701-m03 "sudo cat /home/docker/cp-test_ha-895701-m04_ha-895701-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.62s)
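
The CopyFile block above is an all-pairs matrix: each node is seeded with testdata/cp-test.txt, and every (source, destination) pair is exercised and verified with ssh plus sudo cat. A minimal sketch of that iteration follows, assuming the illustrative minikube and must helpers below; the host-side copies into /tmp are omitted for brevity.

// Sketch of the copy matrix above: seed each node, then fan the file out
// to every other node and read it back over ssh.
package main

import (
	"fmt"
	"os/exec"
)

func minikube(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "ha-895701"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	nodes := []string{"ha-895701", "ha-895701-m02", "ha-895701-m03", "ha-895701-m04"}
	for _, src := range nodes {
		must(minikube("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"))
		must(minikube("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt"))
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			must(minikube("cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath))
			must(minikube("ssh", "-n", dst, "sudo cat "+dstPath))
		}
	}
}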

TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 node stop m02 --alsologtostderr -v 5: (11.870512953s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5: exit status 7 (647.929193ms)

-- stdout --
	ha-895701
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-895701-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-895701-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-895701-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0904 04:24:57.265010  463786 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:24:57.265320  463786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:24:57.265334  463786 out.go:374] Setting ErrFile to fd 2...
	I0904 04:24:57.265338  463786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:24:57.265561  463786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:24:57.265742  463786 out.go:368] Setting JSON to false
	I0904 04:24:57.265784  463786 mustload.go:65] Loading cluster: ha-895701
	I0904 04:24:57.265895  463786 notify.go:220] Checking for updates...
	I0904 04:24:57.266278  463786 config.go:182] Loaded profile config "ha-895701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:24:57.266309  463786 status.go:174] checking status of ha-895701 ...
	I0904 04:24:57.266975  463786 cli_runner.go:164] Run: docker container inspect ha-895701 --format={{.State.Status}}
	I0904 04:24:57.287888  463786 status.go:371] ha-895701 host status = "Running" (err=<nil>)
	I0904 04:24:57.287951  463786 host.go:66] Checking if "ha-895701" exists ...
	I0904 04:24:57.288188  463786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-895701
	I0904 04:24:57.306120  463786 host.go:66] Checking if "ha-895701" exists ...
	I0904 04:24:57.306414  463786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 04:24:57.306473  463786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-895701
	I0904 04:24:57.324177  463786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/ha-895701/id_rsa Username:docker}
	I0904 04:24:57.412231  463786 ssh_runner.go:195] Run: systemctl --version
	I0904 04:24:57.416963  463786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 04:24:57.427395  463786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:24:57.478672  463786 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 04:24:57.469600191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:24:57.479615  463786 kubeconfig.go:125] found "ha-895701" server: "https://192.168.49.254:8443"
	I0904 04:24:57.479670  463786 api_server.go:166] Checking apiserver status ...
	I0904 04:24:57.479727  463786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 04:24:57.490582  463786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1620/cgroup
	I0904 04:24:57.500186  463786 api_server.go:182] apiserver freezer: "6:freezer:/docker/e18d488be2719c10a218c0465d7d56a8c1bdfbf544b7962c6c48fb0fbb92fa2c/kubepods/burstable/pod2a239872fe5722d10079b4c7598730c6/c1a14b1a3bd6b50a8f54e73fc0aa9cd611d032bca6295a02fa5c9554dd131c63"
	I0904 04:24:57.500252  463786 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e18d488be2719c10a218c0465d7d56a8c1bdfbf544b7962c6c48fb0fbb92fa2c/kubepods/burstable/pod2a239872fe5722d10079b4c7598730c6/c1a14b1a3bd6b50a8f54e73fc0aa9cd611d032bca6295a02fa5c9554dd131c63/freezer.state
	I0904 04:24:57.508249  463786 api_server.go:204] freezer state: "THAWED"
	I0904 04:24:57.508280  463786 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 04:24:57.514056  463786 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 04:24:57.514078  463786 status.go:463] ha-895701 apiserver status = Running (err=<nil>)
	I0904 04:24:57.514089  463786 status.go:176] ha-895701 status: &{Name:ha-895701 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:24:57.514103  463786 status.go:174] checking status of ha-895701-m02 ...
	I0904 04:24:57.514364  463786 cli_runner.go:164] Run: docker container inspect ha-895701-m02 --format={{.State.Status}}
	I0904 04:24:57.532703  463786 status.go:371] ha-895701-m02 host status = "Stopped" (err=<nil>)
	I0904 04:24:57.532726  463786 status.go:384] host is not running, skipping remaining checks
	I0904 04:24:57.532733  463786 status.go:176] ha-895701-m02 status: &{Name:ha-895701-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:24:57.532752  463786 status.go:174] checking status of ha-895701-m03 ...
	I0904 04:24:57.532988  463786 cli_runner.go:164] Run: docker container inspect ha-895701-m03 --format={{.State.Status}}
	I0904 04:24:57.549666  463786 status.go:371] ha-895701-m03 host status = "Running" (err=<nil>)
	I0904 04:24:57.549707  463786 host.go:66] Checking if "ha-895701-m03" exists ...
	I0904 04:24:57.550015  463786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-895701-m03
	I0904 04:24:57.568362  463786 host.go:66] Checking if "ha-895701-m03" exists ...
	I0904 04:24:57.568631  463786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 04:24:57.568677  463786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-895701-m03
	I0904 04:24:57.586789  463786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/ha-895701-m03/id_rsa Username:docker}
	I0904 04:24:57.672052  463786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 04:24:57.682966  463786 kubeconfig.go:125] found "ha-895701" server: "https://192.168.49.254:8443"
	I0904 04:24:57.682995  463786 api_server.go:166] Checking apiserver status ...
	I0904 04:24:57.683038  463786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 04:24:57.693060  463786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	I0904 04:24:57.701525  463786 api_server.go:182] apiserver freezer: "6:freezer:/docker/eb7b1c35de04853902174ca13961aaf09afb2dba2c19d8a716c283b270ae03fe/kubepods/burstable/pod1348af2bbac56470f897e4434f6d0544/af28a87129fd5b6479346ae1ade30dc8822acc40048962943930eaef1cb9039d"
	I0904 04:24:57.701579  463786 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/eb7b1c35de04853902174ca13961aaf09afb2dba2c19d8a716c283b270ae03fe/kubepods/burstable/pod1348af2bbac56470f897e4434f6d0544/af28a87129fd5b6479346ae1ade30dc8822acc40048962943930eaef1cb9039d/freezer.state
	I0904 04:24:57.709297  463786 api_server.go:204] freezer state: "THAWED"
	I0904 04:24:57.709330  463786 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 04:24:57.714136  463786 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 04:24:57.714165  463786 status.go:463] ha-895701-m03 apiserver status = Running (err=<nil>)
	I0904 04:24:57.714176  463786 status.go:176] ha-895701-m03 status: &{Name:ha-895701-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:24:57.714198  463786 status.go:174] checking status of ha-895701-m04 ...
	I0904 04:24:57.714518  463786 cli_runner.go:164] Run: docker container inspect ha-895701-m04 --format={{.State.Status}}
	I0904 04:24:57.732345  463786 status.go:371] ha-895701-m04 host status = "Running" (err=<nil>)
	I0904 04:24:57.732366  463786 host.go:66] Checking if "ha-895701-m04" exists ...
	I0904 04:24:57.732639  463786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-895701-m04
	I0904 04:24:57.750440  463786 host.go:66] Checking if "ha-895701-m04" exists ...
	I0904 04:24:57.750702  463786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 04:24:57.750739  463786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-895701-m04
	I0904 04:24:57.767662  463786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/ha-895701-m04/id_rsa Username:docker}
	I0904 04:24:57.851883  463786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 04:24:57.862353  463786 status.go:176] ha-895701-m04 status: &{Name:ha-895701-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)
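
The non-zero exit above is the expected outcome of stopping m02: minikube status reports degraded nodes through its exit code (7 in this run, with the m02 host, kubelet, apiserver, and kubeconfig all Stopped). Below is a sketch of how a caller might tolerate that, assuming only what this log shows, namely that a stopped node produces a non-zero exit such as 7 rather than an execution error.

// Sketch: run `minikube status` and treat a non-zero exit as "cluster
// degraded" instead of a hard failure, as the test above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-895701", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 7 in the run above corresponded to one control-plane
		// node being fully stopped.
		fmt.Printf("status exited %d: some nodes are not running\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err) // the binary itself failed to run
	}
}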

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.43s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 node start m02 --alsologtostderr -v 5: (8.567730922s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (92.23s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 stop --alsologtostderr -v 5: (36.764716536s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 start --wait true --alsologtostderr -v 5
E0904 04:26:06.255094  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:33.963360  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 start --wait true --alsologtostderr -v 5: (55.358022491s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (92.23s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.03s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 node delete m03 --alsologtostderr -v 5
E0904 04:26:43.844122  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:43.850601  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:43.862020  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:43.883484  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:43.924923  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:44.006484  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:44.168068  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:44.489749  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:45.132041  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:46.414063  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:26:48.976192  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 node delete m03 --alsologtostderr -v 5: (8.301029422s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.03s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (35.6s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 stop --alsologtostderr -v 5
E0904 04:26:54.098483  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:27:04.340505  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:27:24.822032  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 stop --alsologtostderr -v 5: (35.496399433s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5: exit status 7 (101.956374ms)

-- stdout --
	ha-895701
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-895701-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-895701-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 04:27:26.226060  480770 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:27:26.226296  480770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:27:26.226304  480770 out.go:374] Setting ErrFile to fd 2...
	I0904 04:27:26.226308  480770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:27:26.226492  480770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:27:26.226648  480770 out.go:368] Setting JSON to false
	I0904 04:27:26.226687  480770 mustload.go:65] Loading cluster: ha-895701
	I0904 04:27:26.226764  480770 notify.go:220] Checking for updates...
	I0904 04:27:26.227076  480770 config.go:182] Loaded profile config "ha-895701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:27:26.227099  480770 status.go:174] checking status of ha-895701 ...
	I0904 04:27:26.227527  480770 cli_runner.go:164] Run: docker container inspect ha-895701 --format={{.State.Status}}
	I0904 04:27:26.244625  480770 status.go:371] ha-895701 host status = "Stopped" (err=<nil>)
	I0904 04:27:26.244646  480770 status.go:384] host is not running, skipping remaining checks
	I0904 04:27:26.244653  480770 status.go:176] ha-895701 status: &{Name:ha-895701 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:27:26.244678  480770 status.go:174] checking status of ha-895701-m02 ...
	I0904 04:27:26.244918  480770 cli_runner.go:164] Run: docker container inspect ha-895701-m02 --format={{.State.Status}}
	I0904 04:27:26.261083  480770 status.go:371] ha-895701-m02 host status = "Stopped" (err=<nil>)
	I0904 04:27:26.261105  480770 status.go:384] host is not running, skipping remaining checks
	I0904 04:27:26.261113  480770 status.go:176] ha-895701-m02 status: &{Name:ha-895701-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:27:26.261138  480770 status.go:174] checking status of ha-895701-m04 ...
	I0904 04:27:26.261442  480770 cli_runner.go:164] Run: docker container inspect ha-895701-m04 --format={{.State.Status}}
	I0904 04:27:26.278311  480770 status.go:371] ha-895701-m04 host status = "Stopped" (err=<nil>)
	I0904 04:27:26.278353  480770 status.go:384] host is not running, skipping remaining checks
	I0904 04:27:26.278364  480770 status.go:176] ha-895701-m04 status: &{Name:ha-895701-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.60s)

TestMultiControlPlane/serial/RestartCluster (56.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0904 04:28:05.784128  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.080455837s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.82s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (32.92s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-895701 node add --control-plane --alsologtostderr -v 5: (31.923043766s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-895701 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (32.92s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (56.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-696681 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0904 04:29:27.706072  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-696681 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (56.9129372s)
--- PASS: TestJSONOutput/start/Command (56.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-696681 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-696681 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-696681 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-696681 --output=json --user=testUser: (5.712207934s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-581324 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-581324 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (64.09079ms)

-- stdout --
	{"specversion":"1.0","id":"2d4b9986-7b33-43df-bb46-156db9d58c6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-581324] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d58e7681-ab9d-4448-a738-2ccc646f8876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"86a95723-f68d-4967-8f6a-c6e217ce93b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cbafd2a4-1ee0-470f-8a8f-c9ae2d3e6cab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig"}}
	{"specversion":"1.0","id":"42066f5e-b7f9-41ed-82af-e932268013ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube"}}
	{"specversion":"1.0","id":"c69f83ba-52fd-4ec0-81bc-8addcb9fc42b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f0f84307-a673-4232-8cb6-9eb578bcbc52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9d0e2878-145d-4750-956a-a52833b314d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-581324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-581324
--- PASS: TestErrorJSONOutput (0.20s)
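
Each line of the --output=json stream above is a CloudEvents-style JSON object, and the failure surfaces as a final io.k8s.sigs.minikube.error event carrying the exit code, error name, and message. Below is a minimal decoder for such a stream; the struct fields mirror the lines above, and everything beyond them is an assumption.

// Sketch: scan a minikube --output=json stream on stdin and surface error
// events like the DRV_UNSUPPORTED_OS one above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not an event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("minikube error %s (%s): %s\n",
				ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}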

TestKicCustomNetwork/create_custom_network (35.41s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-318141 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-318141 --network=: (33.364246583s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-318141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-318141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-318141: (2.025634689s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.41s)

TestKicCustomNetwork/use_default_bridge_network (24.18s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-505450 --network=bridge
E0904 04:31:06.255293  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-505450 --network=bridge: (22.263189949s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-505450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-505450
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-505450: (1.894612835s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.18s)

TestKicExistingNetwork (24.89s)

=== RUN   TestKicExistingNetwork
I0904 04:31:13.101896  389671 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0904 04:31:13.119262  389671 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0904 04:31:13.119337  389671 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0904 04:31:13.119358  389671 cli_runner.go:164] Run: docker network inspect existing-network
W0904 04:31:13.136940  389671 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0904 04:31:13.136976  389671 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0904 04:31:13.136993  389671 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0904 04:31:13.137164  389671 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0904 04:31:13.154188  389671 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5b5e0e458f53 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:83:f5:75:11:2a} reservation:<nil>}
I0904 04:31:13.154748  389671 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f99270}
I0904 04:31:13.154783  389671 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0904 04:31:13.154855  389671 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0904 04:31:13.206407  389671 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-432854 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-432854 --network=existing-network: (22.858610707s)
helpers_test.go:175: Cleaning up "existing-network-432854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-432854
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-432854: (1.890896294s)
I0904 04:31:37.974398  389671 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.89s)
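
The passing sequence above is the essence of --network support: pre-create a bridge network with docker directly, then start a profile against it so minikube adopts the existing network instead of provisioning its own. A minimal standalone sketch of the same flow, assuming docker and a minikube binary on PATH (the profile name is illustrative; the network flags are copied from the run above):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, failing loudly.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	// Pre-create a bridge network, mirroring the flags logged above.
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"existing-network")
	// Start a profile that reuses the pre-existing network.
	run("minikube", "start", "-p", "existing-network-demo",
		"--network=existing-network")
	// The network should still be listed under its original name.
	fmt.Print(run("docker", "network", "ls", "--format", "{{.Name}}"))
}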

TestKicCustomSubnet (24.84s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-450757 --subnet=192.168.60.0/24
E0904 04:31:43.847137  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-450757 --subnet=192.168.60.0/24: (22.774869286s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-450757 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-450757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-450757
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-450757: (2.046873771s)
--- PASS: TestKicCustomSubnet (24.84s)
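
TestKicCustomSubnet is the complementary check: pass --subnet at start time, then read the value back through the same Go template the test uses at kic_custom_network_test.go:161. A sketch of just the verification half, assuming the profile from the run above is still up (kic names the docker network after the profile):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24" // the value passed via --subnet above
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-450757", // network name matches the profile
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("subnet mismatch: got %q, want %q", got, want))
	}
	fmt.Println("subnet verified:", want)
}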

TestKicStaticIP (27.12s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-203719 --static-ip=192.168.200.200
E0904 04:32:11.550990  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-203719 --static-ip=192.168.200.200: (24.922504388s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-203719 ip
helpers_test.go:175: Cleaning up "static-ip-203719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-203719
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-203719: (2.070518863s)
--- PASS: TestKicStaticIP (27.12s)
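
The static-IP variant asserts that `minikube ip` reports exactly the address passed via --static-ip. A sketch of that assertion, assuming the profile from the run above is still running (the name and address are copied from the log as an illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.200.200" // the value passed via --static-ip above
	out, err := exec.Command("minikube", "-p", "static-ip-203719", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("ip mismatch: got %q, want %q", got, want))
	}
	fmt.Println("static IP honoured:", want)
}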

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (53.36s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-556499 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-556499 --driver=docker  --container-runtime=containerd: (25.032434951s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-571504 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-571504 --driver=docker  --container-runtime=containerd: (22.924195359s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-556499
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-571504
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-571504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-571504
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-571504: (2.144382667s)
helpers_test.go:175: Cleaning up "first-556499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-556499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-556499: (2.144084613s)
--- PASS: TestMinikubeProfile (53.36s)
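
The profile test drives `profile list -ojson` twice to confirm both profiles are reported. A sketch of consuming that output; the struct below is an assumption based on the JSON this minikube build emits (a top-level "valid" array of profiles carrying a Name field), not a documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models the assumed shape of `minikube profile list -ojson`.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("profile:", p.Name)
	}
}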

TestMountStart/serial/StartWithMountFirst (5.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-473115 --memory=3072 --mount-string /tmp/TestMountStartserial2266501413/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-473115 --memory=3072 --mount-string /tmp/TestMountStartserial2266501413/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.404809938s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.41s)
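
--mount-string takes the form host-dir:guest-dir, and the remaining --mount-* flags tune the 9p mount (owner uid/gid, message size, server port). A sketch reproducing the invocation above with the flags copied verbatim from the log (the temp dir and profile name are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	hostDir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		panic(err)
	}
	// host-dir:guest-dir, plus the 9p tuning flags used by the test above.
	args := []string{
		"start", "-p", "mount-demo", "--memory=3072",
		"--mount-string", hostDir + ":/minikube-host",
		"--mount-gid", "0", "--mount-uid", "0",
		"--mount-msize", "6543", "--mount-port", "46464",
		"--no-kubernetes", "--driver=docker", "--container-runtime=containerd",
	}
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}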

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-473115 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-492941 --memory=3072 --mount-string /tmp/TestMountStartserial2266501413/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-492941 --memory=3072 --mount-string /tmp/TestMountStartserial2266501413/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.682726989s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.68s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-492941 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-473115 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-473115 --alsologtostderr -v=5: (1.580452011s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-492941 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-492941
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-492941: (1.173466117s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-492941
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-492941: (6.171088429s)
--- PASS: TestMountStart/serial/RestartStopped (7.17s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-492941 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (60.68s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200987 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200987 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m0.236309415s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.68s)
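
FreshStart2Nodes boots a control plane plus one worker in a single `start --nodes=2` call, then relies on `status` exiting zero only when every node is up. A condensed sketch of the same two steps (profile name illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Bring up one control plane plus one worker, as the test does.
	start := exec.Command("minikube", "start", "-p", "multinode-demo",
		"--wait=true", "--memory=3072", "--nodes=2",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start failed: %v\n%s", err, out))
	}
	// status exits 0 only when every node is running.
	out, err := exec.Command("minikube", "-p", "multinode-demo", "status").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}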

TestMultiNode/serial/DeployApp2Nodes (18.89s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-200987 -- rollout status deployment/busybox: (17.489923597s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-7bzxl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-h6whg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-7bzxl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-h6whg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-7bzxl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-h6whg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.89s)
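
The deployment check schedules a two-replica busybox Deployment across the nodes, then execs nslookup in every pod against a public name, the in-cluster short name, and the full service FQDN. A sketch of that verification loop, assuming kubectl already points at the cluster and simplifying pod selection to all pods in the default namespace:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pod names, fetched the same way as multinode_test.go:528.
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	targets := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range targets {
			// Each lookup must succeed from every pod on every node.
			if res, err := exec.Command("kubectl", "exec", pod, "--",
				"nslookup", host).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("%s -> %s: %v\n%s", pod, host, err, res))
			}
		}
	}
	fmt.Println("DNS resolution verified from all pods")
}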

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-7bzxl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-7bzxl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-h6whg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200987 -- exec busybox-7b57f96db7-h6whg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
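
The shell pipeline above is worth unpacking: busybox's nslookup prints the resolved answer on its fifth output line, so `awk 'NR==5'` isolates that line and `cut -d' ' -f3` keeps the third space-separated field, which is the IP behind host.minikube.internal (the bridge gateway, 192.168.67.1 in this run); the follow-up ping then proves the pod can actually reach it. A sketch of the same probe, with the pod name copied from the run above purely as an illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-7bzxl" // illustrative pod name from this run
	// busybox nslookup prints the answer on line 5; field 3 is the IP.
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	// The pod should be able to reach the host-side gateway directly.
	if res, err := exec.Command("kubectl", "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("ping %s: %v\n%s", hostIP, err, res))
	}
	fmt.Println("host reachable at", hostIP)
}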

TestMultiNode/serial/AddNode (10.5s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-200987 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-200987 -v=5 --alsologtostderr: (9.910544289s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (10.50s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-200987 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (8.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp testdata/cp-test.txt multinode-200987:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile570091047/001/cp-test_multinode-200987.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987:/home/docker/cp-test.txt multinode-200987-m02:/home/docker/cp-test_multinode-200987_multinode-200987-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m02 "sudo cat /home/docker/cp-test_multinode-200987_multinode-200987-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987:/home/docker/cp-test.txt multinode-200987-m03:/home/docker/cp-test_multinode-200987_multinode-200987-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m03 "sudo cat /home/docker/cp-test_multinode-200987_multinode-200987-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp testdata/cp-test.txt multinode-200987-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile570091047/001/cp-test_multinode-200987-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987-m02:/home/docker/cp-test.txt multinode-200987:/home/docker/cp-test_multinode-200987-m02_multinode-200987.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987 "sudo cat /home/docker/cp-test_multinode-200987-m02_multinode-200987.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987-m02:/home/docker/cp-test.txt multinode-200987-m03:/home/docker/cp-test_multinode-200987-m02_multinode-200987-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m03 "sudo cat /home/docker/cp-test_multinode-200987-m02_multinode-200987-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp testdata/cp-test.txt multinode-200987-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile570091047/001/cp-test_multinode-200987-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987-m03:/home/docker/cp-test.txt multinode-200987:/home/docker/cp-test_multinode-200987-m03_multinode-200987.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987 "sudo cat /home/docker/cp-test_multinode-200987-m03_multinode-200987.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 cp multinode-200987-m03:/home/docker/cp-test.txt multinode-200987-m02:/home/docker/cp-test_multinode-200987-m03_multinode-200987-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 ssh -n multinode-200987-m02 "sudo cat /home/docker/cp-test_multinode-200987-m03_multinode-200987-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.90s)
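
CopyFile pushes a fixture through every direction (local to node, node to local, and node to node) and validates each hop with `ssh -n <node> sudo cat`. A sketch of the first hop only, assuming a running multinode profile (the profile name and file content are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// mk shells out to the minikube binary and fails loudly on error.
func mk(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	const profile = "multinode-demo" // illustrative profile name
	const want = "hello from the host"
	src, err := os.CreateTemp("", "cp-test")
	if err != nil {
		panic(err)
	}
	if _, err := src.WriteString(want); err != nil {
		panic(err)
	}
	src.Close()
	// local -> control-plane node, then read it back over ssh.
	mk("-p", profile, "cp", src.Name(), profile+":/home/docker/cp-test.txt")
	got := mk("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
	if !strings.Contains(got, want) {
		panic("copied file content mismatch")
	}
	fmt.Println("cp round-trip verified")
}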

TestMultiNode/serial/StopNode (2.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-200987 node stop m03: (1.172965507s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200987 status: exit status 7 (449.930399ms)

-- stdout --
	multinode-200987
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200987-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200987-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr: exit status 7 (443.043005ms)

-- stdout --
	multinode-200987
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200987-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200987-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 04:35:29.090519  545287 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:35:29.090628  545287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:35:29.090638  545287 out.go:374] Setting ErrFile to fd 2...
	I0904 04:35:29.090642  545287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:35:29.090864  545287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:35:29.091084  545287 out.go:368] Setting JSON to false
	I0904 04:35:29.091135  545287 mustload.go:65] Loading cluster: multinode-200987
	I0904 04:35:29.091212  545287 notify.go:220] Checking for updates...
	I0904 04:35:29.091691  545287 config.go:182] Loaded profile config "multinode-200987": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:35:29.091718  545287 status.go:174] checking status of multinode-200987 ...
	I0904 04:35:29.092189  545287 cli_runner.go:164] Run: docker container inspect multinode-200987 --format={{.State.Status}}
	I0904 04:35:29.109823  545287 status.go:371] multinode-200987 host status = "Running" (err=<nil>)
	I0904 04:35:29.109866  545287 host.go:66] Checking if "multinode-200987" exists ...
	I0904 04:35:29.110202  545287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200987
	I0904 04:35:29.127140  545287 host.go:66] Checking if "multinode-200987" exists ...
	I0904 04:35:29.127475  545287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 04:35:29.127521  545287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200987
	I0904 04:35:29.145556  545287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/multinode-200987/id_rsa Username:docker}
	I0904 04:35:29.228038  545287 ssh_runner.go:195] Run: systemctl --version
	I0904 04:35:29.232031  545287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 04:35:29.242612  545287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:35:29.289638  545287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-04 04:35:29.280172511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:35:29.290203  545287 kubeconfig.go:125] found "multinode-200987" server: "https://192.168.67.2:8443"
	I0904 04:35:29.290235  545287 api_server.go:166] Checking apiserver status ...
	I0904 04:35:29.290270  545287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 04:35:29.300698  545287 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1579/cgroup
	I0904 04:35:29.308869  545287 api_server.go:182] apiserver freezer: "6:freezer:/docker/b8bca70c1997cf105e349823cf54052fdbe167de4c4d98a6f54d4bffc565fe01/kubepods/burstable/pod949893ea4fb18bd17e38e4c5dca2d303/715e084281553460a20ab3f5ee5568982a558dd55f1de54fd461873ac7372e42"
	I0904 04:35:29.308936  545287 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8bca70c1997cf105e349823cf54052fdbe167de4c4d98a6f54d4bffc565fe01/kubepods/burstable/pod949893ea4fb18bd17e38e4c5dca2d303/715e084281553460a20ab3f5ee5568982a558dd55f1de54fd461873ac7372e42/freezer.state
	I0904 04:35:29.316422  545287 api_server.go:204] freezer state: "THAWED"
	I0904 04:35:29.316448  545287 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0904 04:35:29.320552  545287 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0904 04:35:29.320576  545287 status.go:463] multinode-200987 apiserver status = Running (err=<nil>)
	I0904 04:35:29.320589  545287 status.go:176] multinode-200987 status: &{Name:multinode-200987 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:35:29.320608  545287 status.go:174] checking status of multinode-200987-m02 ...
	I0904 04:35:29.320886  545287 cli_runner.go:164] Run: docker container inspect multinode-200987-m02 --format={{.State.Status}}
	I0904 04:35:29.338583  545287 status.go:371] multinode-200987-m02 host status = "Running" (err=<nil>)
	I0904 04:35:29.338609  545287 host.go:66] Checking if "multinode-200987-m02" exists ...
	I0904 04:35:29.338905  545287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200987-m02
	I0904 04:35:29.356447  545287 host.go:66] Checking if "multinode-200987-m02" exists ...
	I0904 04:35:29.356737  545287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 04:35:29.356788  545287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200987-m02
	I0904 04:35:29.372973  545287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/21409-385918/.minikube/machines/multinode-200987-m02/id_rsa Username:docker}
	I0904 04:35:29.455898  545287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 04:35:29.466149  545287 status.go:176] multinode-200987-m02 status: &{Name:multinode-200987-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:35:29.466182  545287 status.go:174] checking status of multinode-200987-m03 ...
	I0904 04:35:29.466479  545287 cli_runner.go:164] Run: docker container inspect multinode-200987-m03 --format={{.State.Status}}
	I0904 04:35:29.483580  545287 status.go:371] multinode-200987-m03 host status = "Stopped" (err=<nil>)
	I0904 04:35:29.483603  545287 status.go:384] host is not running, skipping remaining checks
	I0904 04:35:29.483613  545287 status.go:176] multinode-200987-m03 status: &{Name:multinode-200987-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
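
Note the exit-code convention exercised here: `minikube status` deliberately returns non-zero (7 in this run) whenever any host is stopped, while still printing a well-formed report, so callers should inspect the code rather than treat it as a hard failure. A sketch of handling that in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-demo", "status") // illustrative profile
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 was observed above when a node's host is stopped;
		// the printed report is still worth parsing.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		panic(err) // the binary itself failed to run
	}
}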

TestMultiNode/serial/StartAfterStop (6.6s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-200987 node start m03 -v=5 --alsologtostderr: (5.958748743s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.60s)

TestMultiNode/serial/RestartKeepsNodes (77.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200987
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-200987
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-200987: (24.748314751s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200987 --wait=true -v=5 --alsologtostderr
E0904 04:36:06.254585  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:36:43.844128  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200987 --wait=true -v=5 --alsologtostderr: (52.356060299s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200987
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.21s)

TestMultiNode/serial/DeleteNode (5.08s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-200987 node delete m03: (4.529429016s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.08s)

TestMultiNode/serial/StopMultiNode (23.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-200987 stop: (23.628487499s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200987 status: exit status 7 (90.797623ms)

-- stdout --
	multinode-200987
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-200987-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr: exit status 7 (86.49463ms)

-- stdout --
	multinode-200987
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-200987-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 04:37:22.135544  555530 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:37:22.135824  555530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:37:22.135834  555530 out.go:374] Setting ErrFile to fd 2...
	I0904 04:37:22.135838  555530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:37:22.136057  555530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:37:22.136224  555530 out.go:368] Setting JSON to false
	I0904 04:37:22.136266  555530 mustload.go:65] Loading cluster: multinode-200987
	I0904 04:37:22.136361  555530 notify.go:220] Checking for updates...
	I0904 04:37:22.136653  555530 config.go:182] Loaded profile config "multinode-200987": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:37:22.136679  555530 status.go:174] checking status of multinode-200987 ...
	I0904 04:37:22.137093  555530 cli_runner.go:164] Run: docker container inspect multinode-200987 --format={{.State.Status}}
	I0904 04:37:22.155272  555530 status.go:371] multinode-200987 host status = "Stopped" (err=<nil>)
	I0904 04:37:22.155300  555530 status.go:384] host is not running, skipping remaining checks
	I0904 04:37:22.155310  555530 status.go:176] multinode-200987 status: &{Name:multinode-200987 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 04:37:22.155338  555530 status.go:174] checking status of multinode-200987-m02 ...
	I0904 04:37:22.155573  555530 cli_runner.go:164] Run: docker container inspect multinode-200987-m02 --format={{.State.Status}}
	I0904 04:37:22.172853  555530 status.go:371] multinode-200987-m02 host status = "Stopped" (err=<nil>)
	I0904 04:37:22.172875  555530 status.go:384] host is not running, skipping remaining checks
	I0904 04:37:22.172887  555530 status.go:176] multinode-200987-m02 status: &{Name:multinode-200987-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.81s)

TestMultiNode/serial/RestartMultiNode (45.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200987 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E0904 04:37:29.325678  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200987 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (45.439888403s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200987 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.98s)

TestMultiNode/serial/ValidateNameConflict (23.98s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200987
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200987-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-200987-m02 --driver=docker  --container-runtime=containerd: exit status 14 (63.293126ms)

-- stdout --
	* [multinode-200987-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-200987-m02' is duplicated with machine name 'multinode-200987-m02' in profile 'multinode-200987'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200987-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200987-m03 --driver=docker  --container-runtime=containerd: (21.78825235s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-200987
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-200987: exit status 80 (265.764332ms)

-- stdout --
	* Adding node m03 to cluster multinode-200987 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-200987-m03 already exists in multinode-200987-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-200987-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-200987-m03: (1.817627869s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.98s)

TestPreload (132.31s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-510067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-510067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m7.788163509s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-510067 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-510067 image pull gcr.io/k8s-minikube/busybox: (2.300301197s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-510067
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-510067: (5.738108421s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-510067 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-510067 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.047775402s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-510067 image list
helpers_test.go:175: Cleaning up "test-preload-510067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-510067
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-510067: (2.216282792s)
--- PASS: TestPreload (132.31s)
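
The preload test guards a subtle regression: images pulled into containerd while running without a preload tarball (--preload=false) must still be present after a stop and a normal preloaded restart, i.e. applying the preload must not clobber the existing image store. A condensed sketch of the flow, with the flags copied from the run above (profile name illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// mk shells out to the minikube binary and fails loudly on error.
func mk(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	const p = "preload-demo" // illustrative profile name
	// Start without a preload tarball, pull an extra image, then restart
	// normally; the image must survive the preload being applied.
	mk("start", "-p", p, "--memory=3072", "--preload=false",
		"--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.32.0")
	mk("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	mk("stop", "-p", p)
	mk("start", "-p", p, "--memory=3072",
		"--driver=docker", "--container-runtime=containerd")
	if !strings.Contains(mk("-p", p, "image", "list"), "busybox") {
		panic("pulled image was lost across the preloaded restart")
	}
	fmt.Println("image survived restart")
}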

TestScheduledStopUnix (98.36s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-110911 --memory=3072 --driver=docker  --container-runtime=containerd
E0904 04:41:06.254512  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-110911 --memory=3072 --driver=docker  --container-runtime=containerd: (21.749283351s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110911 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-110911 -n scheduled-stop-110911
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110911 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0904 04:41:10.361470  389671 retry.go:31] will retry after 81.18µs: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.362627  389671 retry.go:31] will retry after 90.764µs: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.363778  389671 retry.go:31] will retry after 240.526µs: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.364889  389671 retry.go:31] will retry after 455.478µs: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.366004  389671 retry.go:31] will retry after 519.911µs: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.367111  389671 retry.go:31] will retry after 627.293µs: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.368238  389671 retry.go:31] will retry after 658.334µs: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.369394  389671 retry.go:31] will retry after 1.585013ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.371614  389671 retry.go:31] will retry after 3.096776ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.375833  389671 retry.go:31] will retry after 3.9509ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.380051  389671 retry.go:31] will retry after 7.490984ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.388263  389671 retry.go:31] will retry after 5.645645ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.394568  389671 retry.go:31] will retry after 15.726486ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.410801  389671 retry.go:31] will retry after 17.447429ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
I0904 04:41:10.429082  389671 retry.go:31] will retry after 39.48112ms: open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/scheduled-stop-110911/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110911 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-110911 -n scheduled-stop-110911
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-110911
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110911 --schedule 15s
E0904 04:41:43.847008  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-110911
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-110911: exit status 7 (70.377916ms)

-- stdout --
	scheduled-stop-110911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-110911 -n scheduled-stop-110911
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-110911 -n scheduled-stop-110911: exit status 7 (66.157155ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-110911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-110911
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-110911: (5.28805761s)
--- PASS: TestScheduledStopUnix (98.36s)
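
Scheduled stops are driven entirely by flags on `stop`: --schedule <duration> forks a background process that stops the profile later, --cancel-scheduled kills that process, and `status --format={{.Host}}` is how the test polls for the final Stopped state. A sketch of the cancel-then-reschedule dance, assuming a running profile (name illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// stop invokes `minikube stop` with the given extra flags.
func stop(profile string, extra ...string) {
	args := append([]string{"stop", "-p", profile}, extra...)
	if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
}

func main() {
	const p = "sched-demo"        // illustrative profile name
	stop(p, "--schedule", "5m")   // arm a stop five minutes out
	stop(p, "--cancel-scheduled") // disarm it again
	stop(p, "--schedule", "15s")  // re-arm with a short fuse
	// Poll the host state until the background stop has fired. status
	// exits non-zero once the host is down, so its error is expected.
	for i := 0; i < 30; i++ {
		out, _ := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", p).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("scheduled stop completed")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("host never reached Stopped")
}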

TestInsufficientStorage (12.01s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-957028 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-957028 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.711260154s)

-- stdout --
	{"specversion":"1.0","id":"52f7b349-5055-42c2-82f0-8852fadf1690","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-957028] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6041945-7e72-4f52-990b-be205938fbef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"17b34b72-3a04-4f10-b4dc-74289e1ad154","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c7b6b361-1a7f-4e53-bd74-fcceecbf3d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig"}}
	{"specversion":"1.0","id":"256aacfa-c2f0-4fe0-af28-6f82f4cfe3be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube"}}
	{"specversion":"1.0","id":"d720d0d9-2c62-424b-9a7a-739fb14a4dd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"61a2de0e-af48-45f2-b8c2-61e6d9cc34f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c5b95eaf-8107-4774-a77f-500872a92011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"322144e3-6afb-4603-8fd7-977fc3ba691f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"94380242-f83b-4e04-b2fd-53b1d9dd00c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c2ed43b-95f2-4b1e-9134-5244f6da0ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8f1f3fca-8357-4d38-840a-94a5b7804836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-957028\" primary control-plane node in \"insufficient-storage-957028\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"26755219-5864-440e-9043-9de4c0c5bcf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756936034-21409 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2feea12-78fc-43c2-aa54-41568b28f8eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"28f6d9af-8d2e-4d0a-973c-1fa829e729dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-957028 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-957028 --output=json --layout=cluster: exit status 7 (259.638978ms)
-- stdout --
	{"Name":"insufficient-storage-957028","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-957028","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0904 04:42:36.535235  578521 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-957028" does not appear in /home/jenkins/minikube-integration/21409-385918/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-957028 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-957028 --output=json --layout=cluster: exit status 7 (256.417763ms)
-- stdout --
	{"Name":"insufficient-storage-957028","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-957028","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0904 04:42:36.792128  578619 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-957028" does not appear in /home/jenkins/minikube-integration/21409-385918/kubeconfig
	E0904 04:42:36.801970  578619 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/insufficient-storage-957028/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-957028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-957028
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-957028: (1.782929283s)
--- PASS: TestInsufficientStorage (12.01s)
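
Note: the failure path above can be reproduced by hand. A minimal sketch, assuming the test-only MINIKUBE_TEST_* variables shown in the JSON events behave the same outside the harness (the profile name "storage-demo" is illustrative):

    # Simulate a nearly full /var: 100 GiB capacity, only 19 GiB free
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p storage-demo --output=json --driver=docker --container-runtime=containerd
    echo $?   # expected 26 (RSRC_DOCKER_STORAGE); pass --force to skip the check

With --output=json each stdout line is a CloudEvents object, so the advice text can be pulled out with, e.g., jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.advice'.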

TestRunningBinaryUpgrade (50.82s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2346186016 start -p running-upgrade-696408 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2346186016 start -p running-upgrade-696408 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (24.248385351s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-696408 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-696408 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.365141177s)
helpers_test.go:175: Cleaning up "running-upgrade-696408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-696408
E0904 04:46:06.254418  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-696408: (2.240074112s)
--- PASS: TestRunningBinaryUpgrade (50.82s)
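
Note: the upgrade exercised here is just two start calls against the same running profile, first with a released binary and then with the binary under test. Condensed (binary paths and profile name illustrative; the older binary still takes --vm-driver):

    OLD=/tmp/minikube-v1.32.0.2346186016   # released v1.32.0 binary
    NEW=out/minikube-linux-amd64           # binary under test
    $OLD start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
    $NEW start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=containerd   # in-place upgrade while running
    $NEW delete -p upgrade-demo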

TestKubernetesUpgrade (320.07s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.712884973s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-484246
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-484246: (1.210292933s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-484246 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-484246 status --format={{.Host}}: exit status 7 (93.044491ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m37.605718449s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-484246 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (78.394485ms)
-- stdout --
	* [kubernetes-upgrade-484246] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-484246
	    minikube start -p kubernetes-upgrade-484246 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4842462 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-484246 --kubernetes-version=v1.34.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-484246 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.808910032s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-484246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-484246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-484246: (2.479842026s)
--- PASS: TestKubernetesUpgrade (320.07s)
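
Note: the whole sequence condenses to five commands; the exit codes below are the ones observed in this run (profile name illustrative):

    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    minikube stop -p k8s-upgrade-demo
    minikube -p k8s-upgrade-demo status --format={{.Host}}   # exit 7, prints "Stopped" (expected for a stopped host)
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.34.0 --driver=docker --container-runtime=containerd   # in-place upgrade
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # exit 106, K8S_DOWNGRADE_UNSUPPORTED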

TestMissingContainerUpgrade (140.69s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.713312578 start -p missing-upgrade-956995 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.713312578 start -p missing-upgrade-956995 --memory=3072 --driver=docker  --container-runtime=containerd: (48.101879383s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-956995
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-956995
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-956995 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-956995 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m25.294357049s)
helpers_test.go:175: Cleaning up "missing-upgrade-956995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-956995
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-956995: (3.670038556s)
--- PASS: TestMissingContainerUpgrade (140.69s)
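
Note: "missing" here means the node container is removed behind minikube's back, and the new binary must detect the stale profile state and recreate it on start. In shell form (profile name illustrative):

    docker stop missing-demo && docker rm missing-demo   # delete the node container directly
    minikube start -p missing-demo --driver=docker --container-runtime=containerd   # recreates the container from the existing profile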

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-318041 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-318041 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (83.606792ms)
-- stdout --
	* [NoKubernetes-318041] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
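
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config trips the same MK_USAGE check. The remedy the error message itself suggests (profile name illustrative):

    minikube config unset kubernetes-version   # clear any globally configured version
    minikube start -p no-k8s-demo --no-kubernetes --driver=docker --container-runtime=containerd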

TestNoKubernetes/serial/StartWithK8s (33.06s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-318041 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-318041 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.742534493s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-318041 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.06s)
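
Note: status -o json is machine-readable, so the expected state can be asserted directly. A sketch using jq (the filter is illustrative, field names taken from the status output later in this report):

    out/minikube-linux-amd64 -p NoKubernetes-318041 status -o json \
      | jq -e '.Host == "Running" and .Kubelet == "Running"'   # jq -e exits non-zero if the assertion is false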

TestNetworkPlugins/group/false (7.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-362672 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-362672 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (177.743982ms)
-- stdout --
	* [false-362672] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0904 04:42:42.730942  580783 out.go:360] Setting OutFile to fd 1 ...
	I0904 04:42:42.731529  580783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:42:42.731544  580783 out.go:374] Setting ErrFile to fd 2...
	I0904 04:42:42.731551  580783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 04:42:42.731874  580783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-385918/.minikube/bin
	I0904 04:42:42.732702  580783 out.go:368] Setting JSON to false
	I0904 04:42:42.733925  580783 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8706,"bootTime":1756952257,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 04:42:42.734001  580783 start.go:140] virtualization: kvm guest
	I0904 04:42:42.737730  580783 out.go:179] * [false-362672] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 04:42:42.739112  580783 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 04:42:42.739117  580783 notify.go:220] Checking for updates...
	I0904 04:42:42.740493  580783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 04:42:42.741778  580783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-385918/kubeconfig
	I0904 04:42:42.742972  580783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-385918/.minikube
	I0904 04:42:42.744102  580783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 04:42:42.745293  580783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 04:42:42.746770  580783 config.go:182] Loaded profile config "NoKubernetes-318041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:42:42.746899  580783 config.go:182] Loaded profile config "force-systemd-env-396799": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:42:42.747002  580783 config.go:182] Loaded profile config "offline-containerd-296304": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0904 04:42:42.747110  580783 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 04:42:42.771823  580783 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 04:42:42.772052  580783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 04:42:42.833940  580783 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:71 SystemTime:2025-09-04 04:42:42.822714352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 04:42:42.834038  580783 docker.go:318] overlay module found
	I0904 04:42:42.836014  580783 out.go:179] * Using the docker driver based on user configuration
	I0904 04:42:42.837410  580783 start.go:304] selected driver: docker
	I0904 04:42:42.837427  580783 start.go:918] validating driver "docker" against <nil>
	I0904 04:42:42.837440  580783 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 04:42:42.839778  580783 out.go:203] 
	W0904 04:42:42.841061  580783 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0904 04:42:42.842547  580783 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-362672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-362672

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-362672

>>> host: /etc/nsswitch.conf:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /etc/hosts:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /etc/resolv.conf:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-362672

>>> host: crictl pods:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: crictl containers:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> k8s: describe netcat deployment:
error: context "false-362672" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-362672" does not exist

>>> k8s: netcat logs:
error: context "false-362672" does not exist

>>> k8s: describe coredns deployment:
error: context "false-362672" does not exist

>>> k8s: describe coredns pods:
error: context "false-362672" does not exist

>>> k8s: coredns logs:
error: context "false-362672" does not exist

>>> k8s: describe api server pod(s):
error: context "false-362672" does not exist

>>> k8s: api server logs:
error: context "false-362672" does not exist

>>> host: /etc/cni:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: ip a s:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: ip r s:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: iptables-save:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: iptables table nat:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> k8s: describe kube-proxy daemon set:
error: context "false-362672" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-362672" does not exist

>>> k8s: kube-proxy logs:
error: context "false-362672" does not exist

>>> host: kubelet daemon status:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: kubelet daemon config:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> k8s: kubelet logs:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-362672

>>> host: docker daemon status:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: docker daemon config:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /etc/docker/daemon.json:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: docker system info:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: cri-docker daemon status:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: cri-docker daemon config:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: cri-dockerd version:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: containerd daemon status:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: containerd daemon config:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /etc/containerd/config.toml:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: containerd config dump:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: crio daemon status:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: crio daemon config:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: /etc/crio:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

>>> host: crio config:
* Profile "false-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-362672"

----------------------- debugLogs end: false-362672 [took: 7.091798711s] --------------------------------
helpers_test.go:175: Cleaning up "false-362672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-362672
--- PASS: TestNetworkPlugins/group/false (7.48s)
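
Note: this is a negative test: with the containerd runtime there is no built-in bridge networking, so --cni=false is rejected during flag validation (exit 14) before any cluster is created, which is why every debugLogs probe above reports a missing profile or context. A sketch (profile name illustrative):

    minikube start -p false-demo --cni=false --driver=docker --container-runtime=containerd    # exit 14: "containerd" requires CNI
    minikube start -p false-demo --cni=kindnet --driver=docker --container-runtime=containerd  # a concrete CNI, exercised later in this report, is accepted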

TestNoKubernetes/serial/StartWithStopK8s (23.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-318041 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-318041 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.072492327s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-318041 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-318041 status -o json: exit status 2 (274.465964ms)
-- stdout --
	{"Name":"NoKubernetes-318041","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-318041
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-318041: (1.921879289s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.27s)
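
Note: re-running start with --no-kubernetes on a profile that previously ran Kubernetes leaves the host up but stops the control plane, and status then exits 2 rather than 0. A sketch of checking that state by hand:

    out/minikube-linux-amd64 -p NoKubernetes-318041 status -o json; echo $?   # prints 2
    # the JSON shows Host "Running", Kubelet/APIServer "Stopped", Kubeconfig "Configured"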

TestNoKubernetes/serial/Start (5.44s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-318041 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-318041 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.44475863s)
--- PASS: TestNoKubernetes/serial/Start (5.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-318041 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-318041 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.073513ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
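
Note: systemctl is-active --quiet exits 0 only when the unit is active; the "status 3" in stderr is systemd's code for an inactive unit, surfaced through the ssh session and reported as exit 1 by minikube ssh. Checked by hand:

    out/minikube-linux-amd64 ssh -p NoKubernetes-318041 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet inactive (exit $?)"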

TestNoKubernetes/serial/ProfileList (1.63s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.63s)

TestNoKubernetes/serial/Stop (1.48s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-318041
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-318041: (1.476653962s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

TestNoKubernetes/serial/StartNoArgs (6.8s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-318041 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-318041 --driver=docker  --container-runtime=containerd: (6.794931875s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.80s)

TestStoppedBinaryUpgrade/Setup (3.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.05s)

TestStoppedBinaryUpgrade/Upgrade (86.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1472248502 start -p stopped-upgrade-398584 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1472248502 start -p stopped-upgrade-398584 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (56.146672113s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1472248502 -p stopped-upgrade-398584 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1472248502 -p stopped-upgrade-398584 stop: (1.235953425s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-398584 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-398584 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.045849323s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (86.43s)
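
Note: same shape as TestRunningBinaryUpgrade earlier in this report, except the profile is cleanly stopped with the old binary before the new one takes over (paths and profile name illustrative):

    OLD=/tmp/minikube-v1.32.0.1472248502
    NEW=out/minikube-linux-amd64
    $OLD start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
    $OLD -p stopped-demo stop
    $NEW start -p stopped-demo --memory=3072 --driver=docker --container-runtime=containerd   # migrates the stopped profile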

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-318041 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-318041 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.958014ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-398584
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-398584: (1.049396933s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

TestPause/serial/Start (88.55s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-182738 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-182738 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m28.54967389s)
--- PASS: TestPause/serial/Start (88.55s)

TestNetworkPlugins/group/auto/Start (52.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.973537912s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.97s)

TestNetworkPlugins/group/kindnet/Start (46.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0904 04:46:43.843819  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (46.815023034s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (46.82s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-362672 "pgrep -a kubelet"
I0904 04:47:06.945497  389671 config.go:182] Loaded profile config "auto-362672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-362672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d4h9d" [f7cd0e34-4113-42c1-b500-2df1945b252c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d4h9d" [f7cd0e34-4113-42c1-b500-2df1945b252c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003223616s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)
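
Note: the readiness wait the harness performs is roughly equivalent to a kubectl wait against the pod label used above; a hand-run approximation:

    kubectl --context auto-362672 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-362672 wait --for=condition=Ready pod -l app=netcat --timeout=15m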

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-362672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
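
Note: DNS, Localhost and HairPin together cover the three connectivity paths from inside a pod: cluster DNS resolution, the pod reaching its own localhost, and the pod reaching itself back through its own Service (hairpin). All three run inside the netcat deployment and can be repeated by hand:

    kubectl --context auto-362672 exec deployment/netcat -- nslookup kubernetes.default                     # DNS
    kubectl --context auto-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"     # Localhost
    kubectl --context auto-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"        # HairPin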

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-pr6db" [413d96a0-4d09-4f19-9960-bae1b74e98d7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003445397s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
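
ControllerPod steps wait for the CNI's own controller pod to report healthy before the connectivity probes run. With plain kubectl, the same wait looks roughly like this (label and namespace taken from the run above):

  kubectl --context kindnet-362672 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m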

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-362672 "pgrep -a kubelet"
I0904 04:47:24.973909  389671 config.go:182] Loaded profile config "kindnet-362672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-362672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ck8rc" [01311d2d-953f-4c55-beb1-687e94846f02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ck8rc" [01311d2d-953f-4c55-beb1-687e94846f02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003946785s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-362672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (52.3s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (52.296066915s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.30s)

TestPause/serial/SecondStartNoReconfiguration (6.48s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-182738 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-182738 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.468870903s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.48s)

TestPause/serial/Pause (0.74s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-182738 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.33s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-182738 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-182738 --output=json --layout=cluster: exit status 2 (325.973624ms)
-- stdout --
	{"Name":"pause-182738","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-182738","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
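
VerifyStatus asserts on the structured status payload rather than the exit code alone: in the output above, a paused cluster reports the apiserver as 418/"Paused" and the kubelet as 405/"Stopped", and the command itself exits 2. A sketch of pulling the same fields out of the payload (assumes jq is installed):

  minikube status -p pause-182738 --output=json --layout=cluster | jq '.Nodes[].Components'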

TestPause/serial/Unpause (0.67s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-182738 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.79s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-182738 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

TestPause/serial/DeletePaused (2.59s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-182738 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-182738 --alsologtostderr -v=5: (2.590811453s)
--- PASS: TestPause/serial/DeletePaused (2.59s)

TestPause/serial/VerifyDeletedResources (16.28s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.214049579s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-182738
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-182738: exit status 1 (21.523055ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-182738: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.28s)
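
VerifyDeletedResources treats docker's non-zero exit as the success signal: once the profile is deleted, the volume lookup must fail. A sketch of the same cleanup check:

  # After `minikube delete -p pause-182738`, no profile resources should remain.
  if docker volume inspect pause-182738 >/dev/null 2>&1; then
    echo "volume still present" >&2
    exit 1
  fi
  docker ps -a --filter name=pause-182738 --format '{{.Names}}'       # expect empty
  docker network ls --filter name=pause-182738 --format '{{.Name}}'   # expect empty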

TestNetworkPlugins/group/custom-flannel/Start (42.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (42.452593699s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (42.45s)

TestNetworkPlugins/group/flannel/Start (51.04s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.040131601s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.04s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-jpw8v" [e00d4aa6-ee91-49a0-ae36-d2406cffcc6f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004119968s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-362672 "pgrep -a kubelet"
I0904 04:48:33.119905  389671 config.go:182] Loaded profile config "calico-362672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-362672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9bx7k" [02de6540-eeb7-49a9-bcac-e8766ffa8e09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9bx7k" [02de6540-eeb7-49a9-bcac-e8766ffa8e09] Running
I0904 04:48:36.629784  389671 config.go:182] Loaded profile config "custom-flannel-362672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00382493s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-362672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-362672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ts56m" [2d4f12ea-5b35-4966-ad08-19507e83c5b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ts56m" [2d4f12ea-5b35-4966-ad08-19507e83c5b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003725998s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-362672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-362672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-pqvqs" [a58918d1-521a-4168-913e-8ac2c2b595ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003247223s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-362672 "pgrep -a kubelet"
I0904 04:49:01.644007  389671 config.go:182] Loaded profile config "flannel-362672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-362672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vw9r6" [8e240936-c5cd-4128-a2c4-6c13e0a57955] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vw9r6" [8e240936-c5cd-4128-a2c4-6c13e0a57955] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004022177s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/bridge/Start (43.71s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (43.70752634s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.71s)

TestNetworkPlugins/group/enable-default-cni/Start (69.78s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-362672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m9.781429092s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.78s)

TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-362672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (56.44s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-152768 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-152768 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (56.438104234s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (56.44s)

TestStartStop/group/no-preload/serial/FirstStart (68.29s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-251788 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-251788 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m8.289198065s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-362672 "pgrep -a kubelet"
I0904 04:49:46.042911  389671 config.go:182] Loaded profile config "bridge-362672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-362672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jdpz5" [9cf51944-dbfe-4152-87c6-2d4c028d5e83] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jdpz5" [9cf51944-dbfe-4152-87c6-2d4c028d5e83] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004038779s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-362672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (46.47s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-174515 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-174515 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (46.468811097s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.47s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-362672 "pgrep -a kubelet"
I0904 04:50:18.483034  389671 config.go:182] Loaded profile config "enable-default-cni-362672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-362672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d425h" [d34bf0b4-0f58-4769-a522-3444055e8462] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d425h" [d34bf0b4-0f58-4769-a522-3444055e8462] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003164782s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.19s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-152768 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [38644972-1b41-4872-8d40-abea851c0e71] Pending
helpers_test.go:352: "busybox" [38644972-1b41-4872-8d40-abea851c0e71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [38644972-1b41-4872-8d40-abea851c0e71] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003728143s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-152768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-362672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-362672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
E0904 04:52:12.297547  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-152768 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-152768 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008835801s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-152768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)
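
The addon steps exercise per-addon image and registry overrides, which is how the suite points metrics-server at a deliberately unreachable registry. The same flags work interactively (values copied from the run above):

  minikube addons enable metrics-server -p old-k8s-version-152768 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
  kubectl --context old-k8s-version-152768 -n kube-system describe deploy/metrics-server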

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-152768 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-152768 --alsologtostderr -v=3: (12.021463557s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/DeployApp (8.3s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-251788 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [60015e5c-94c5-434f-a1d9-924b1d30ba2e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [60015e5c-94c5-434f-a1d9-924b1d30ba2e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004079568s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-251788 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.30s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.59s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-664161 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-664161 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (42.588965228s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152768 -n old-k8s-version-152768
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152768 -n old-k8s-version-152768: exit status 7 (77.417195ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-152768 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
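
The EnableAddonAfterStop steps lean on minikube's status exit codes: in this run, a stopped host makes status exit 7 with Host=Stopped, which the test tolerates ("may be ok") before enabling the dashboard addon offline. A sketch of the same tolerant check:

  minikube status --format='{{.Host}}' -p old-k8s-version-152768
  rc=$?
  # Exit code 7 with "Stopped" is expected here; anything else is a real failure.
  if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
    echo "unexpected status exit code: $rc" >&2
    exit 1
  fi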

TestStartStop/group/old-k8s-version/serial/SecondStart (52.87s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-152768 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-152768 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.556569357s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152768 -n old-k8s-version-152768
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.87s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.83s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-251788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-251788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.756935656s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-251788 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.83s)

TestStartStop/group/no-preload/serial/Stop (12.38s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-251788 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-251788 --alsologtostderr -v=3: (12.380815531s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.38s)

TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-174515 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [df9bcf09-67b3-49c1-add4-e204a46bf703] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [df9bcf09-67b3-49c1-add4-e204a46bf703] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003411028s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-174515 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-251788 -n no-preload-251788
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-251788 -n no-preload-251788: exit status 7 (83.98348ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-251788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (47.7s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-251788 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0904 04:51:06.254643  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/addons-919243/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-251788 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (47.403063058s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-251788 -n no-preload-251788
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.70s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-174515 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-174515 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/embed-certs/serial/Stop (13.02s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-174515 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-174515 --alsologtostderr -v=3: (13.015456826s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174515 -n embed-certs-174515
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174515 -n embed-certs-174515: exit status 7 (70.158146ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-174515 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (47.23s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-174515 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-174515 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (46.872735753s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174515 -n embed-certs-174515
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.23s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-664161 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1e47a018-d9f4-49a5-b712-f323e5410962] Pending
helpers_test.go:352: "busybox" [1e47a018-d9f4-49a5-b712-f323e5410962] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1e47a018-d9f4-49a5-b712-f323e5410962] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003096892s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-664161 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-664161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-664161 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-664161 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-664161 --alsologtostderr -v=3: (11.938415443s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-94g52" [d18ef84c-9586-4207-9caf-e1f5b9a4a745] Running
E0904 04:51:43.843959  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/functional-874981/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003366821s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-94g52" [d18ef84c-9586-4207-9caf-e1f5b9a4a745] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003958053s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-152768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-568z8" [adfad769-e8f3-45d5-9d74-85f5a546a1ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003724959s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-152768 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
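
VerifyKubernetesImages lists everything loaded in the node and flags images outside the expected Kubernetes set, such as the kindnetd and busybox entries above. The raw listing comes from:

  minikube -p old-k8s-version-152768 image list --format=json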

TestStartStop/group/old-k8s-version/serial/Pause (2.84s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-152768 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152768 -n old-k8s-version-152768
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152768 -n old-k8s-version-152768: exit status 2 (314.780775ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-152768 -n old-k8s-version-152768
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-152768 -n old-k8s-version-152768: exit status 2 (304.923921ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-152768 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152768 -n old-k8s-version-152768
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-152768 -n old-k8s-version-152768
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161: exit status 7 (74.020463ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-664161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-664161 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-664161 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (50.883083579s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.22s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-568z8" [adfad769-e8f3-45d5-9d74-85f5a546a1ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003558936s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-251788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/FirstStart (33.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-114073 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-114073 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (33.410599675s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.41s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-251788 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (3.73s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-251788 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-251788 --alsologtostderr -v=1: (1.17396624s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-251788 -n no-preload-251788
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-251788 -n no-preload-251788: exit status 2 (315.246407ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-251788 -n no-preload-251788
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-251788 -n no-preload-251788: exit status 2 (379.88611ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-251788 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-251788 --alsologtostderr -v=1: (1.06136044s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-251788 -n no-preload-251788
E0904 04:52:07.167174  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:07.173549  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:07.184979  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:07.206402  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:07.248096  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:07.329622  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-251788 -n no-preload-251788
E0904 04:52:07.491704  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.73s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wwbjk" [34537654-ee94-4572-85d8-46e1cda44a0e] Running
E0904 04:52:17.418943  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:18.703000  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:18.709382  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:18.720784  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:18.742131  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:18.783548  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:18.864988  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:19.026454  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:19.348053  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:19.990095  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003602535s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wwbjk" [34537654-ee94-4572-85d8-46e1cda44a0e] Running
E0904 04:52:21.272169  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 04:52:23.833592  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002785187s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-174515 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-174515 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.74s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-174515 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-174515 -n embed-certs-174515
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-174515 -n embed-certs-174515: exit status 2 (317.815266ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-174515 -n embed-certs-174515
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-174515 -n embed-certs-174515: exit status 2 (298.396309ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-174515 --alsologtostderr -v=1
E0904 04:52:27.660783  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-174515 -n embed-certs-174515
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-174515 -n embed-certs-174515
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.74s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-114073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/newest-cni/serial/Stop (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-114073 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-114073 --alsologtostderr -v=3: (1.190669147s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-114073 -n newest-cni-114073
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-114073 -n newest-cni-114073: exit status 7 (67.116482ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-114073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (14.28s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-114073 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0904 04:52:39.197184  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/kindnet-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-114073 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (13.992098127s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-114073 -n newest-cni-114073
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.28s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qqgvn" [28947646-854e-43f1-85e6-17b57600cf03] Running
E0904 04:52:48.142926  389671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-385918/.minikube/profiles/auto-362672/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00375014s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-114073 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-114073 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-114073 -n newest-cni-114073
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-114073 -n newest-cni-114073: exit status 2 (289.358742ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-114073 -n newest-cni-114073
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-114073 -n newest-cni-114073: exit status 2 (278.653714ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-114073 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-114073 -n newest-cni-114073
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-114073 -n newest-cni-114073
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.68s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qqgvn" [28947646-854e-43f1-85e6-17b57600cf03] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003318301s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-664161 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-664161 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-664161 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161: exit status 2 (276.314893ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161: exit status 2 (279.847518ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-664161 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-664161 -n default-k8s-diff-port-664161
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

Test skip (25/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.08s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-362672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-362672

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-362672" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> k8s: kubelet logs:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-362672

>>> host: docker daemon status:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: docker daemon config:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: docker system info:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: cri-docker daemon status:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: cri-docker daemon config:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: cri-dockerd version:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: containerd daemon status:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: containerd daemon config:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: containerd config dump:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: crio daemon status:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: crio daemon config:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: /etc/crio:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"

>>> host: crio config:
* Profile "kubenet-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-362672"
----------------------- debugLogs end: kubenet-362672 [took: 3.875819253s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-362672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-362672
--- SKIP: TestNetworkPlugins/group/kubenet (4.08s)

TestNetworkPlugins/group/cilium (4.06s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-362672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-362672

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-362672

>>> host: /etc/nsswitch.conf:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /etc/hosts:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /etc/resolv.conf:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-362672

>>> host: crictl pods:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: crictl containers:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> k8s: describe netcat deployment:
error: context "cilium-362672" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-362672" does not exist

>>> k8s: netcat logs:
error: context "cilium-362672" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-362672" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-362672" does not exist

>>> k8s: coredns logs:
error: context "cilium-362672" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-362672" does not exist

>>> k8s: api server logs:
error: context "cilium-362672" does not exist

>>> host: /etc/cni:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: ip a s:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: ip r s:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: iptables-save:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: iptables table nat:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-362672

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-362672

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-362672" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-362672" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-362672

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-362672

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-362672" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-362672" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-362672" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-362672" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-362672" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: kubelet daemon config:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> k8s: kubelet logs:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-362672

>>> host: docker daemon status:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: docker daemon config:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: docker system info:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: cri-docker daemon status:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: cri-docker daemon config:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: cri-dockerd version:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: containerd daemon status:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: containerd daemon config:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: containerd config dump:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: crio daemon status:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: crio daemon config:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: /etc/crio:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"

>>> host: crio config:
* Profile "cilium-362672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-362672"
----------------------- debugLogs end: cilium-362672 [took: 3.891563622s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-362672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-362672
--- SKIP: TestNetworkPlugins/group/cilium (4.06s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-720640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-720640
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)