Test Report: Docker_Linux_containerd_arm64 21682

7a7892355cfa060afe2cc9d2507b1d1308b66169:2025-10-02:41740

Test fail (4/332)

Order  Failed test                                     Duration (s)
54     TestDockerEnvContainerd                         48.81
91     TestFunctional/parallel/DashboardCmd            302.55
98     TestFunctional/parallel/ServiceCmdConnect       604.35
100    TestFunctional/parallel/PersistentVolumeClaim   248.53
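The first failure, TestDockerEnvContainerd, broke at the legacy-builder (DOCKER_BUILDKIT=0) docker build step run against the minikube node over SSH: the daemon returned "Error response from daemon: exit status 1" and the expected local/minikube-dockerenv-containerd-test image never showed up in docker image ls. The following is a minimal reproduction sketch assembled from the commands in the log below, not part of the original report; the profile name dockerenv-775346 and the testdata/docker-env build context are taken from this run, and the eval form of docker-env is an assumption (the test instead wires SSH_AUTH_SOCK, SSH_AGENT_PID and DOCKER_HOST by hand).

	# Start the profile with the containerd runtime on the docker driver
	out/minikube-linux-arm64 start -p dockerenv-775346 --driver=docker --container-runtime=containerd
	# Point the local docker CLI at the node over SSH (assumes the docker-env output is eval-able)
	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-775346)"
	# The failing step: legacy-builder build of the test image against the remote daemon
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	# The test then checks that the image is visible to the node's daemon
	docker image ls | grep minikube-dockerenv-containerd-test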
TestDockerEnvContainerd (48.81s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-775346 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-775346 --driver=docker  --container-runtime=containerd: (31.079275264s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-775346"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-775346": (1.095341208s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (975.155297ms)

-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-j1Y53nsAbvDS/agent.2805741" SSH_AGENT_PID="2805742" DOCKER_HOST=ssh://docker@127.0.0.1:36117 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-10-02 20:59:46.216911048 +0000 UTC m=+450.981987465
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-775346
helpers_test.go:243: (dbg) docker inspect dockerenv-775346:

-- stdout --
	[
	    {
	        "Id": "739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb",
	        "Created": "2025-10-02T20:59:07.011043266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2803414,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:59:07.080612474Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/hostname",
	        "HostsPath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/hosts",
	        "LogPath": "/var/lib/docker/containers/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb/739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb-json.log",
	        "Name": "/dockerenv-775346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-775346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "dockerenv-775346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "739a435e5129394374fc41d7be340e61f8f9faf70bd03826910d7f9c4c1f3dfb",
	                "LowerDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f-init/diff:/var/lib/docker/overlay2/51331203fb22f22857c79ac4aca1f3d12d523fa3ef805f7f258c2d1849e728ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3a7e0770a4cc5a4a0aaeccc45ac25fd2bba799e559577eb0bd747692d1aae4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-775346",
	                "Source": "/var/lib/docker/volumes/dockerenv-775346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-775346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-775346",
	                "name.minikube.sigs.k8s.io": "dockerenv-775346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67621488a9e1741cb12668c061c12ad684d0a96998fa69be5905d0dad8fdc318",
	            "SandboxKey": "/var/run/docker/netns/67621488a9e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36117"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36118"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36121"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36119"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36120"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-775346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:4d:99:40:6e:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "131a0fc9f153f6aafeb418a164a9cc1920e87a5107e75c0b7f4840fad8ec7a6a",
	                    "EndpointID": "19bdd29dd2aece5a3d7dfc6da4c22e2b04f333ac183bc5b1178ef75fdd75d46d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-775346",
	                        "739a435e5129"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p dockerenv-775346 -n dockerenv-775346
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p dockerenv-775346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p dockerenv-775346 logs -n 25: (1.024564542s)
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                       ARGS                                                        │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ip         │ addons-774992 ip                                                                                                  │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ addons     │ addons-774992 addons disable registry --alsologtostderr -v=1                                                      │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ addons     │ addons-774992 addons disable nvidia-device-plugin --alsologtostderr -v=1                                          │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ ssh        │ addons-774992 ssh cat /opt/local-path-provisioner/pvc-a72c6780-abc2-4dc3-9d6e-db75a010a533_default_test-pvc/file1 │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ addons     │ addons-774992 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                   │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable volumesnapshots --alsologtostderr -v=1                                               │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ addons     │ addons-774992 addons disable csi-hostpath-driver --alsologtostderr -v=1                                           │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable cloud-spanner --alsologtostderr -v=1                                                 │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ enable headlamp -p addons-774992 --alsologtostderr -v=1                                                           │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable inspektor-gadget --alsologtostderr -v=1                                              │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable headlamp --alsologtostderr -v=1                                                      │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable metrics-server --alsologtostderr -v=1                                                │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-774992                                    │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable registry-creds --alsologtostderr -v=1                                                │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ ssh        │ addons-774992 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                          │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ ip         │ addons-774992 ip                                                                                                  │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable ingress-dns --alsologtostderr -v=1                                                   │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ addons-774992 addons disable ingress --alsologtostderr -v=1                                                       │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ stop       │ -p addons-774992                                                                                                  │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ enable dashboard -p addons-774992                                                                                 │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ disable dashboard -p addons-774992                                                                                │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ addons     │ disable gvisor -p addons-774992                                                                                   │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ delete     │ -p addons-774992                                                                                                  │ addons-774992    │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:59 UTC │
	│ start      │ -p dockerenv-775346 --driver=docker  --container-runtime=containerd                                               │ dockerenv-775346 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-775346                                                                          │ dockerenv-775346 │ jenkins │ v1.37.0 │ 02 Oct 25 20:59 UTC │ 02 Oct 25 20:59 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:59:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:59:01.671324 2803030 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:59:01.671424 2803030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:59:01.671427 2803030 out.go:374] Setting ErrFile to fd 2...
	I1002 20:59:01.671431 2803030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:59:01.671785 2803030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 20:59:01.672246 2803030 out.go:368] Setting JSON to false
	I1002 20:59:01.674095 2803030 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":60091,"bootTime":1759378651,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 20:59:01.674156 2803030 start.go:140] virtualization:  
	I1002 20:59:01.679060 2803030 out.go:179] * [dockerenv-775346] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:59:01.684310 2803030 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:59:01.684354 2803030 notify.go:220] Checking for updates...
	I1002 20:59:01.688139 2803030 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:59:01.691689 2803030 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 20:59:01.695215 2803030 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 20:59:01.698553 2803030 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:59:01.701917 2803030 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:59:01.705425 2803030 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:59:01.736855 2803030 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:59:01.736974 2803030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:59:01.797634 2803030 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 20:59:01.788187191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:59:01.797730 2803030 docker.go:318] overlay module found
	I1002 20:59:01.801319 2803030 out.go:179] * Using the docker driver based on user configuration
	I1002 20:59:01.804397 2803030 start.go:304] selected driver: docker
	I1002 20:59:01.804404 2803030 start.go:924] validating driver "docker" against <nil>
	I1002 20:59:01.804416 2803030 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:59:01.804534 2803030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:59:01.865425 2803030 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 20:59:01.856520687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:59:01.865560 2803030 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:59:01.865816 2803030 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 20:59:01.865973 2803030 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:59:01.869083 2803030 out.go:179] * Using Docker driver with root privileges
	I1002 20:59:01.872260 2803030 cni.go:84] Creating CNI manager for ""
	I1002 20:59:01.872322 2803030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 20:59:01.872329 2803030 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:59:01.872415 2803030 start.go:348] cluster config:
	{Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:59:01.875711 2803030 out.go:179] * Starting "dockerenv-775346" primary control-plane node in "dockerenv-775346" cluster
	I1002 20:59:01.878739 2803030 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 20:59:01.881864 2803030 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:59:01.884810 2803030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 20:59:01.884876 2803030 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 20:59:01.884884 2803030 cache.go:58] Caching tarball of preloaded images
	I1002 20:59:01.884902 2803030 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:59:01.884989 2803030 preload.go:233] Found /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 20:59:01.884998 2803030 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 20:59:01.885346 2803030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/config.json ...
	I1002 20:59:01.885366 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/config.json: {Name:mk067fa1d4bccb53f2d40a39c10ea94b3afa03dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:01.908922 2803030 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:59:01.908934 2803030 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:59:01.908947 2803030 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:59:01.908967 2803030 start.go:360] acquireMachinesLock for dockerenv-775346: {Name:mkc978960753899d4d97eb2f18d1d9c1e4a59ed3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:59:01.909771 2803030 start.go:364] duration metric: took 783.612µs to acquireMachinesLock for "dockerenv-775346"
	I1002 20:59:01.909805 2803030 start.go:93] Provisioning new machine with config: &{Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 20:59:01.909873 2803030 start.go:125] createHost starting for "" (driver="docker")
	I1002 20:59:01.913336 2803030 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:59:01.913592 2803030 start.go:159] libmachine.API.Create for "dockerenv-775346" (driver="docker")
	I1002 20:59:01.913633 2803030 client.go:168] LocalClient.Create starting
	I1002 20:59:01.913706 2803030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem
	I1002 20:59:01.913741 2803030 main.go:141] libmachine: Decoding PEM data...
	I1002 20:59:01.913753 2803030 main.go:141] libmachine: Parsing certificate...
	I1002 20:59:01.913808 2803030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem
	I1002 20:59:01.913839 2803030 main.go:141] libmachine: Decoding PEM data...
	I1002 20:59:01.913852 2803030 main.go:141] libmachine: Parsing certificate...
	I1002 20:59:01.914242 2803030 cli_runner.go:164] Run: docker network inspect dockerenv-775346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:59:01.930731 2803030 cli_runner.go:211] docker network inspect dockerenv-775346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:59:01.930797 2803030 network_create.go:284] running [docker network inspect dockerenv-775346] to gather additional debugging logs...
	I1002 20:59:01.930811 2803030 cli_runner.go:164] Run: docker network inspect dockerenv-775346
	W1002 20:59:01.947446 2803030 cli_runner.go:211] docker network inspect dockerenv-775346 returned with exit code 1
	I1002 20:59:01.947466 2803030 network_create.go:287] error running [docker network inspect dockerenv-775346]: docker network inspect dockerenv-775346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-775346 not found
	I1002 20:59:01.947478 2803030 network_create.go:289] output of [docker network inspect dockerenv-775346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-775346 not found
	
	** /stderr **
	I1002 20:59:01.947572 2803030 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:59:01.964004 2803030 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001867db0}
	I1002 20:59:01.964032 2803030 network_create.go:124] attempt to create docker network dockerenv-775346 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:59:01.964094 2803030 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-775346 dockerenv-775346
	I1002 20:59:02.030036 2803030 network_create.go:108] docker network dockerenv-775346 192.168.49.0/24 created
	I1002 20:59:02.030059 2803030 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-775346" container
	I1002 20:59:02.030134 2803030 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:59:02.045096 2803030 cli_runner.go:164] Run: docker volume create dockerenv-775346 --label name.minikube.sigs.k8s.io=dockerenv-775346 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:59:02.063083 2803030 oci.go:103] Successfully created a docker volume dockerenv-775346
	I1002 20:59:02.063161 2803030 cli_runner.go:164] Run: docker run --rm --name dockerenv-775346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-775346 --entrypoint /usr/bin/test -v dockerenv-775346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:59:02.600678 2803030 oci.go:107] Successfully prepared a docker volume dockerenv-775346
	I1002 20:59:02.600714 2803030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 20:59:02.600731 2803030 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:59:02.600803 2803030 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-775346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:59:06.942159 2803030 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-775346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.341302563s)
	I1002 20:59:06.942181 2803030 kic.go:203] duration metric: took 4.341445351s to extract preloaded images to volume ...
	W1002 20:59:06.942611 2803030 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:59:06.942725 2803030 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:59:06.995408 2803030 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-775346 --name dockerenv-775346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-775346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-775346 --network dockerenv-775346 --ip 192.168.49.2 --volume dockerenv-775346:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:59:07.306844 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Running}}
	I1002 20:59:07.326145 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
	I1002 20:59:07.352401 2803030 cli_runner.go:164] Run: docker exec dockerenv-775346 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:59:07.399073 2803030 oci.go:144] the created container "dockerenv-775346" has a running status.
	I1002 20:59:07.399102 2803030 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa...
	I1002 20:59:07.721363 2803030 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:59:07.745626 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
	I1002 20:59:07.775587 2803030 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:59:07.775599 2803030 kic_runner.go:114] Args: [docker exec --privileged dockerenv-775346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:59:07.855965 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
	I1002 20:59:07.878672 2803030 machine.go:93] provisionDockerMachine start ...
	I1002 20:59:07.878817 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:07.903567 2803030 main.go:141] libmachine: Using SSH client type: native
	I1002 20:59:07.903902 2803030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36117 <nil> <nil>}
	I1002 20:59:07.903910 2803030 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:59:07.904439 2803030 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40502->127.0.0.1:36117: read: connection reset by peer
	I1002 20:59:11.035010 2803030 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-775346
	
	I1002 20:59:11.035025 2803030 ubuntu.go:182] provisioning hostname "dockerenv-775346"
	I1002 20:59:11.035085 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:11.058812 2803030 main.go:141] libmachine: Using SSH client type: native
	I1002 20:59:11.059149 2803030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36117 <nil> <nil>}
	I1002 20:59:11.059158 2803030 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-775346 && echo "dockerenv-775346" | sudo tee /etc/hostname
	I1002 20:59:11.201403 2803030 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-775346
	
	I1002 20:59:11.201472 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:11.219623 2803030 main.go:141] libmachine: Using SSH client type: native
	I1002 20:59:11.219947 2803030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36117 <nil> <nil>}
	I1002 20:59:11.219962 2803030 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-775346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-775346/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-775346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:59:11.351549 2803030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:59:11.351568 2803030 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-2783765/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-2783765/.minikube}
	I1002 20:59:11.351591 2803030 ubuntu.go:190] setting up certificates
	I1002 20:59:11.351600 2803030 provision.go:84] configureAuth start
	I1002 20:59:11.351657 2803030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-775346
	I1002 20:59:11.367935 2803030 provision.go:143] copyHostCerts
	I1002 20:59:11.367994 2803030 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem, removing ...
	I1002 20:59:11.368002 2803030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem
	I1002 20:59:11.368082 2803030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem (1078 bytes)
	I1002 20:59:11.368171 2803030 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem, removing ...
	I1002 20:59:11.368175 2803030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem
	I1002 20:59:11.368198 2803030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem (1123 bytes)
	I1002 20:59:11.368245 2803030 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem, removing ...
	I1002 20:59:11.368248 2803030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem
	I1002 20:59:11.368279 2803030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem (1675 bytes)
	I1002 20:59:11.368326 2803030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem org=jenkins.dockerenv-775346 san=[127.0.0.1 192.168.49.2 dockerenv-775346 localhost minikube]
	I1002 20:59:11.541388 2803030 provision.go:177] copyRemoteCerts
	I1002 20:59:11.541447 2803030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:59:11.541484 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:11.558660 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
	I1002 20:59:11.655096 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:59:11.673547 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 20:59:11.691398 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:59:11.708786 2803030 provision.go:87] duration metric: took 357.161611ms to configureAuth
	I1002 20:59:11.708802 2803030 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:59:11.708986 2803030 config.go:182] Loaded profile config "dockerenv-775346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 20:59:11.708991 2803030 machine.go:96] duration metric: took 3.830309767s to provisionDockerMachine
	I1002 20:59:11.708996 2803030 client.go:171] duration metric: took 9.795359097s to LocalClient.Create
	I1002 20:59:11.709034 2803030 start.go:167] duration metric: took 9.795431393s to libmachine.API.Create "dockerenv-775346"
	I1002 20:59:11.709041 2803030 start.go:293] postStartSetup for "dockerenv-775346" (driver="docker")
	I1002 20:59:11.709049 2803030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:59:11.709106 2803030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:59:11.709146 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:11.725982 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
	I1002 20:59:11.823260 2803030 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:59:11.826367 2803030 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:59:11.826384 2803030 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:59:11.826394 2803030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/addons for local assets ...
	I1002 20:59:11.826449 2803030 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/files for local assets ...
	I1002 20:59:11.826467 2803030 start.go:296] duration metric: took 117.421232ms for postStartSetup
	I1002 20:59:11.826773 2803030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-775346
	I1002 20:59:11.842967 2803030 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/config.json ...
	I1002 20:59:11.843245 2803030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:59:11.843308 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:11.859881 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
	I1002 20:59:11.952845 2803030 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:59:11.957530 2803030 start.go:128] duration metric: took 10.047643757s to createHost
	I1002 20:59:11.957543 2803030 start.go:83] releasing machines lock for "dockerenv-775346", held for 10.047760952s
	I1002 20:59:11.957621 2803030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-775346
	I1002 20:59:11.973891 2803030 ssh_runner.go:195] Run: cat /version.json
	I1002 20:59:11.973936 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:11.974185 2803030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:59:11.974238 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:11.993173 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
	I1002 20:59:11.993625 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
	I1002 20:59:12.175449 2803030 ssh_runner.go:195] Run: systemctl --version
	I1002 20:59:12.182024 2803030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:59:12.186532 2803030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:59:12.186592 2803030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:59:12.217077 2803030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:59:12.217090 2803030 start.go:495] detecting cgroup driver to use...
	I1002 20:59:12.217132 2803030 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:59:12.217184 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 20:59:12.232020 2803030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 20:59:12.245277 2803030 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:59:12.245336 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:59:12.262770 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:59:12.281237 2803030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:59:12.404819 2803030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:59:12.529325 2803030 docker.go:234] disabling docker service ...
	I1002 20:59:12.529380 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:59:12.551153 2803030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:59:12.564545 2803030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:59:12.681253 2803030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:59:12.807131 2803030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:59:12.819761 2803030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:59:12.834998 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 20:59:12.844551 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 20:59:12.853096 2803030 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 20:59:12.853153 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 20:59:12.861856 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:59:12.871160 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 20:59:12.879672 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 20:59:12.888240 2803030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:59:12.896481 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 20:59:12.904979 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 20:59:12.913668 2803030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 20:59:12.922315 2803030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:59:12.929751 2803030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:59:12.937259 2803030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:59:13.056876 2803030 ssh_runner.go:195] Run: sudo systemctl restart containerd
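
The run of sed edits above rewrites /etc/containerd/config.toml (cgroupfs as the cgroup driver, the registry.k8s.io/pause:3.10.1 sandbox image, the runc v2 shim) before containerd is restarted. A minimal spot-check of the result, assuming a minikube binary on PATH (the job itself invokes out/minikube-linux-arm64), could look like:

  $ minikube -p dockerenv-775346 ssh -- grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
  $ minikube -p dockerenv-775346 ssh -- sudo systemctl is-active containerd
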
	I1002 20:59:13.201401 2803030 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 20:59:13.201480 2803030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 20:59:13.205718 2803030 start.go:563] Will wait 60s for crictl version
	I1002 20:59:13.205804 2803030 ssh_runner.go:195] Run: which crictl
	I1002 20:59:13.209445 2803030 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:59:13.242985 2803030 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 20:59:13.243051 2803030 ssh_runner.go:195] Run: containerd --version
	I1002 20:59:13.266250 2803030 ssh_runner.go:195] Run: containerd --version
	I1002 20:59:13.292452 2803030 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 20:59:13.295463 2803030 cli_runner.go:164] Run: docker network inspect dockerenv-775346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:59:13.309817 2803030 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:59:13.313323 2803030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:59:13.322582 2803030 kubeadm.go:883] updating cluster {Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:59:13.322683 2803030 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 20:59:13.322744 2803030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:59:13.348718 2803030 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 20:59:13.348730 2803030 containerd.go:534] Images already preloaded, skipping extraction
	I1002 20:59:13.348789 2803030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:59:13.373373 2803030 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 20:59:13.373385 2803030 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:59:13.373391 2803030 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1002 20:59:13.373494 2803030 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-775346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:59:13.373554 2803030 ssh_runner.go:195] Run: sudo crictl info
	I1002 20:59:13.398846 2803030 cni.go:84] Creating CNI manager for ""
	I1002 20:59:13.398856 2803030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 20:59:13.398870 2803030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:59:13.398890 2803030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-775346 NodeName:dockerenv-775346 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:59:13.399007 2803030 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-775346"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:59:13.399072 2803030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:59:13.407630 2803030 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:59:13.407691 2803030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:59:13.415357 2803030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1002 20:59:13.428313 2803030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:59:13.441479 2803030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
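
The rendered kubeadm config shown above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new. As a rough, purely illustrative sanity check (not part of the test), that file could be dry-run inside the node with the staged kubeadm binary that the init step below also uses:

  $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
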
	I1002 20:59:13.454287 2803030 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:59:13.457843 2803030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:59:13.467682 2803030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:59:13.583616 2803030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:59:13.600252 2803030 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346 for IP: 192.168.49.2
	I1002 20:59:13.600263 2803030 certs.go:195] generating shared ca certs ...
	I1002 20:59:13.600287 2803030 certs.go:227] acquiring lock for ca certs: {Name:mk9dd0ab4a99d312fca91f03b1dec8574d28a55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:13.600459 2803030 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key
	I1002 20:59:13.600511 2803030 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key
	I1002 20:59:13.600517 2803030 certs.go:257] generating profile certs ...
	I1002 20:59:13.600582 2803030 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.key
	I1002 20:59:13.600598 2803030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.crt with IP's: []
	I1002 20:59:14.302356 2803030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.crt ...
	I1002 20:59:14.302372 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.crt: {Name:mkf4058f63a7f563447b3efb417d68ab79bee39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:14.302572 2803030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.key ...
	I1002 20:59:14.302578 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/client.key: {Name:mke1efa342f8c4475fc37fae4c481852494f8fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:14.302668 2803030 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47
	I1002 20:59:14.302679 2803030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:59:15.255311 2803030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47 ...
	I1002 20:59:15.255333 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47: {Name:mkd6ff2ce78e54296bdd851b535714f0b3de5bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:15.255531 2803030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47 ...
	I1002 20:59:15.255541 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47: {Name:mkb3e9b9904eea83f9d6e1e29864a6615a2b4440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:15.256294 2803030 certs.go:382] copying /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt.ac0d1c47 -> /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt
	I1002 20:59:15.256374 2803030 certs.go:386] copying /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key.ac0d1c47 -> /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key
	I1002 20:59:15.256427 2803030 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key
	I1002 20:59:15.256440 2803030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt with IP's: []
	I1002 20:59:15.335793 2803030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt ...
	I1002 20:59:15.335812 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt: {Name:mkf4be235f119f719985f10d0eb72a856088bea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:15.336024 2803030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key ...
	I1002 20:59:15.336031 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key: {Name:mkd6f13fd7656022f08234460605016139c2114e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:15.336220 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:59:15.336269 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:59:15.336294 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:59:15.336315 2803030 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem (1675 bytes)
	I1002 20:59:15.336894 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:59:15.355548 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:59:15.372993 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:59:15.390743 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:59:15.408089 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:59:15.424870 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:59:15.441897 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:59:15.459561 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/dockerenv-775346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:59:15.476637 2803030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:59:15.494187 2803030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:59:15.506465 2803030 ssh_runner.go:195] Run: openssl version
	I1002 20:59:15.515752 2803030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:59:15.524954 2803030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:59:15.528544 2803030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:53 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:59:15.528597 2803030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:59:15.570682 2803030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
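
The link name b5213941.0 follows the usual OpenSSL c_rehash convention: it is the subject hash that the preceding openssl x509 -hash -noout call prints, with a .0 suffix. A hypothetical manual reproduction inside the node:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
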
	I1002 20:59:15.579387 2803030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:59:15.583698 2803030 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:59:15.583750 2803030 kubeadm.go:400] StartCluster: {Name:dockerenv-775346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-775346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:59:15.583821 2803030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 20:59:15.583890 2803030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:59:15.611249 2803030 cri.go:89] found id: ""
	I1002 20:59:15.611340 2803030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:59:15.619060 2803030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:59:15.626716 2803030 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:59:15.626768 2803030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:59:15.634435 2803030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:59:15.634452 2803030 kubeadm.go:157] found existing configuration files:
	
	I1002 20:59:15.634503 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:59:15.642055 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:59:15.642115 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:59:15.649437 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:59:15.656723 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:59:15.656794 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:59:15.664171 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:59:15.671623 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:59:15.671677 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:59:15.678703 2803030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:59:15.686065 2803030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:59:15.686135 2803030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:59:15.693340 2803030 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:59:15.755497 2803030 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:59:15.755763 2803030 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:59:15.822683 2803030 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:59:30.890674 2803030 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:59:30.890724 2803030 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:59:30.890823 2803030 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:59:30.890880 2803030 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:59:30.890915 2803030 kubeadm.go:318] OS: Linux
	I1002 20:59:30.890961 2803030 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:59:30.891010 2803030 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:59:30.891058 2803030 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:59:30.891109 2803030 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:59:30.891158 2803030 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:59:30.891208 2803030 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:59:30.891255 2803030 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:59:30.891313 2803030 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:59:30.891371 2803030 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:59:30.891445 2803030 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:59:30.891542 2803030 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:59:30.891634 2803030 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:59:30.891697 2803030 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:59:30.894497 2803030 out.go:252]   - Generating certificates and keys ...
	I1002 20:59:30.894596 2803030 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:59:30.894666 2803030 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:59:30.894740 2803030 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:59:30.894798 2803030 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:59:30.894860 2803030 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:59:30.894913 2803030 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:59:30.894969 2803030 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:59:30.895093 2803030 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [dockerenv-775346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:59:30.895146 2803030 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:59:30.895268 2803030 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-775346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:59:30.895356 2803030 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:59:30.895422 2803030 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:59:30.895467 2803030 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:59:30.895527 2803030 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:59:30.895579 2803030 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:59:30.895637 2803030 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:59:30.895692 2803030 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:59:30.895757 2803030 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:59:30.895813 2803030 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:59:30.895897 2803030 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:59:30.895965 2803030 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:59:30.898965 2803030 out.go:252]   - Booting up control plane ...
	I1002 20:59:30.899062 2803030 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:59:30.899165 2803030 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:59:30.899244 2803030 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:59:30.899412 2803030 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:59:30.899510 2803030 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:59:30.899618 2803030 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:59:30.899713 2803030 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:59:30.899753 2803030 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:59:30.899913 2803030 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:59:30.900033 2803030 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:59:30.900096 2803030 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.647584ms
	I1002 20:59:30.900194 2803030 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:59:30.900277 2803030 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:59:30.900369 2803030 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:59:30.900452 2803030 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:59:30.900531 2803030 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.832695021s
	I1002 20:59:30.900600 2803030 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.216675389s
	I1002 20:59:30.900670 2803030 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501951443s
	I1002 20:59:30.900782 2803030 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:59:30.900911 2803030 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:59:30.900979 2803030 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:59:30.901178 2803030 kubeadm.go:318] [mark-control-plane] Marking the node dockerenv-775346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:59:30.901235 2803030 kubeadm.go:318] [bootstrap-token] Using token: cu7jdy.ihu3q1moz9w9prz3
	I1002 20:59:30.904064 2803030 out.go:252]   - Configuring RBAC rules ...
	I1002 20:59:30.904182 2803030 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:59:30.904269 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:59:30.904443 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:59:30.904593 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:59:30.904715 2803030 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:59:30.904804 2803030 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:59:30.904923 2803030 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:59:30.904968 2803030 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:59:30.905015 2803030 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:59:30.905019 2803030 kubeadm.go:318] 
	I1002 20:59:30.905085 2803030 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:59:30.905088 2803030 kubeadm.go:318] 
	I1002 20:59:30.905168 2803030 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:59:30.905171 2803030 kubeadm.go:318] 
	I1002 20:59:30.905196 2803030 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:59:30.905257 2803030 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:59:30.905309 2803030 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:59:30.905312 2803030 kubeadm.go:318] 
	I1002 20:59:30.905368 2803030 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:59:30.905371 2803030 kubeadm.go:318] 
	I1002 20:59:30.905420 2803030 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:59:30.905423 2803030 kubeadm.go:318] 
	I1002 20:59:30.905476 2803030 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:59:30.905554 2803030 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:59:30.905624 2803030 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:59:30.905627 2803030 kubeadm.go:318] 
	I1002 20:59:30.905727 2803030 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:59:30.905807 2803030 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:59:30.905810 2803030 kubeadm.go:318] 
	I1002 20:59:30.905898 2803030 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token cu7jdy.ihu3q1moz9w9prz3 \
	I1002 20:59:30.906004 2803030 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1398f01722b622f845548c7ec65fd7116bf0d2b59eb2ba444bbb109867d41495 \
	I1002 20:59:30.906024 2803030 kubeadm.go:318] 	--control-plane 
	I1002 20:59:30.906028 2803030 kubeadm.go:318] 
	I1002 20:59:30.906116 2803030 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:59:30.906119 2803030 kubeadm.go:318] 
	I1002 20:59:30.906204 2803030 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token cu7jdy.ihu3q1moz9w9prz3 \
	I1002 20:59:30.906324 2803030 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1398f01722b622f845548c7ec65fd7116bf0d2b59eb2ba444bbb109867d41495 
	I1002 20:59:30.906331 2803030 cni.go:84] Creating CNI manager for ""
	I1002 20:59:30.906336 2803030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 20:59:30.911118 2803030 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 20:59:30.913956 2803030 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 20:59:30.918708 2803030 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 20:59:30.918718 2803030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 20:59:30.933013 2803030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 20:59:31.244338 2803030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:59:31.244501 2803030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:59:31.244580 2803030 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-775346 minikube.k8s.io/updated_at=2025_10_02T20_59_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=dockerenv-775346 minikube.k8s.io/primary=true
	I1002 20:59:31.355341 2803030 kubeadm.go:1113] duration metric: took 110.896881ms to wait for elevateKubeSystemPrivileges
	I1002 20:59:31.355361 2803030 ops.go:34] apiserver oom_adj: -16
	I1002 20:59:31.427980 2803030 kubeadm.go:402] duration metric: took 15.844227533s to StartCluster
	I1002 20:59:31.428004 2803030 settings.go:142] acquiring lock: {Name:mke92114e22bdbcff74119665eced9d6b9ac1b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:31.428077 2803030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 20:59:31.428722 2803030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/kubeconfig: {Name:mkcf76851e68b723b0046b589af4cfa7ca9a3bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:59:31.428964 2803030 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 20:59:31.429084 2803030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:59:31.429314 2803030 config.go:182] Loaded profile config "dockerenv-775346": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 20:59:31.429343 2803030 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:59:31.429401 2803030 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-775346"
	I1002 20:59:31.429413 2803030 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-775346"
	I1002 20:59:31.429433 2803030 host.go:66] Checking if "dockerenv-775346" exists ...
	I1002 20:59:31.429947 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
	I1002 20:59:31.430220 2803030 addons.go:69] Setting default-storageclass=true in profile "dockerenv-775346"
	I1002 20:59:31.430243 2803030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-775346"
	I1002 20:59:31.430526 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
	I1002 20:59:31.434700 2803030 out.go:179] * Verifying Kubernetes components...
	I1002 20:59:31.437985 2803030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:59:31.477569 2803030 addons.go:238] Setting addon default-storageclass=true in "dockerenv-775346"
	I1002 20:59:31.477600 2803030 host.go:66] Checking if "dockerenv-775346" exists ...
	I1002 20:59:31.478053 2803030 cli_runner.go:164] Run: docker container inspect dockerenv-775346 --format={{.State.Status}}
	I1002 20:59:31.479510 2803030 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:59:31.482387 2803030 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:59:31.482397 2803030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:59:31.482466 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:31.502300 2803030 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:59:31.502313 2803030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:59:31.502379 2803030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-775346
	I1002 20:59:31.536824 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
	I1002 20:59:31.549570 2803030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36117 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/dockerenv-775346/id_rsa Username:docker}
	I1002 20:59:31.742953 2803030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:59:31.742981 2803030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 20:59:31.809942 2803030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:59:31.846392 2803030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:59:32.125089 2803030 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 20:59:32.126803 2803030 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:59:32.126849 2803030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:59:32.347341 2803030 api_server.go:72] duration metric: took 918.35105ms to wait for apiserver process to appear ...
	I1002 20:59:32.347356 2803030 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:59:32.347373 2803030 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:59:32.350371 2803030 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 20:59:32.354033 2803030 addons.go:514] duration metric: took 924.666456ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 20:59:32.359337 2803030 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
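
The same probe can be repeated by hand from inside the node. Assuming anonymous access to /healthz is left at the kubeadm default (the system:public-info-viewer binding), a hypothetical check against the address used above would be:

  $ curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.49.2:8443/healthz
  ok
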
	I1002 20:59:32.361453 2803030 api_server.go:141] control plane version: v1.34.1
	I1002 20:59:32.361475 2803030 api_server.go:131] duration metric: took 14.108599ms to wait for apiserver health ...
	I1002 20:59:32.361482 2803030 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:59:32.364202 2803030 system_pods.go:59] 5 kube-system pods found
	I1002 20:59:32.364223 2803030 system_pods.go:61] "etcd-dockerenv-775346" [07ec3103-288b-4541-9903-1b9dd312f03c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:59:32.364232 2803030 system_pods.go:61] "kube-apiserver-dockerenv-775346" [13a48838-39ec-48aa-a374-0fc832283591] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:59:32.364241 2803030 system_pods.go:61] "kube-controller-manager-dockerenv-775346" [e8ac73ad-99fb-4e07-a59a-bfec06860633] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:59:32.364248 2803030 system_pods.go:61] "kube-scheduler-dockerenv-775346" [db2548ab-6e65-4cb0-9a3a-19f7394ad0dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:59:32.364253 2803030 system_pods.go:61] "storage-provisioner" [e4443f71-fb3d-4898-964a-d5fc6ec97c63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 20:59:32.364257 2803030 system_pods.go:74] duration metric: took 2.771403ms to wait for pod list to return data ...
	I1002 20:59:32.364267 2803030 kubeadm.go:586] duration metric: took 935.28309ms to wait for: map[apiserver:true system_pods:true]
	I1002 20:59:32.364278 2803030 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:59:32.366930 2803030 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:59:32.366949 2803030 node_conditions.go:123] node cpu capacity is 2
	I1002 20:59:32.366960 2803030 node_conditions.go:105] duration metric: took 2.678782ms to run NodePressure ...
	I1002 20:59:32.366984 2803030 start.go:241] waiting for startup goroutines ...
	I1002 20:59:32.629510 2803030 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-775346" context rescaled to 1 replicas
	I1002 20:59:32.629538 2803030 start.go:246] waiting for cluster config update ...
	I1002 20:59:32.629549 2803030 start.go:255] writing updated cluster config ...
	I1002 20:59:32.629843 2803030 ssh_runner.go:195] Run: rm -f paused
	I1002 20:59:32.686217 2803030 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:59:32.689614 2803030 out.go:179] * Done! kubectl is now configured to use "dockerenv-775346" cluster and "default" namespace by default
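
With the kubeconfig context written for the profile, the freshly started cluster can be inspected directly, e.g. (assuming the host kubectl reported in the line above is on PATH):

  $ kubectl --context dockerenv-775346 get nodes -o wide
  $ kubectl --context dockerenv-775346 -n kube-system get pods
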
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	e9a32d4528330       b1a8c6f707935       10 seconds ago      Running             kindnet-cni               0                   db9341869a0c2       kindnet-th5cx                              kube-system
	87c5429d2fc8c       05baa95f5142d       11 seconds ago      Running             kube-proxy                0                   f407d899871bb       kube-proxy-x2btr                           kube-system
	48296eb5b38fb       b5f57ec6b9867       23 seconds ago      Running             kube-scheduler            0                   1c10caff54b0f       kube-scheduler-dockerenv-775346            kube-system
	c87a17493be59       7eb2c6ff0c5a7       23 seconds ago      Running             kube-controller-manager   0                   d0bfbff314a60       kube-controller-manager-dockerenv-775346   kube-system
	5fb27d2868430       43911e833d64d       23 seconds ago      Running             kube-apiserver            0                   3c6c117d46a86       kube-apiserver-dockerenv-775346            kube-system
	8f0f173c3c0b1       a1894772a478e       23 seconds ago      Running             etcd                      0                   81dd7a872e437       etcd-dockerenv-775346                      kube-system
	
	
	==> containerd <==
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.566281821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-dockerenv-775346,Uid:1d05ec45a3a2d828892c1421eb6b78da,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0bfbff314a601df9a07287dcee6c82ecc775f43251b07ee9703e489e75348fa\""
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.571995593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-dockerenv-775346,Uid:6496bc09232df9221ccdec1baf7dafb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c10caff54b0f1e858f61e7c15a2a5ffe9a2fef8f07b1d4c036c2fd3fed065fb\""
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.574030762Z" level=info msg="CreateContainer within sandbox \"d0bfbff314a601df9a07287dcee6c82ecc775f43251b07ee9703e489e75348fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.578761035Z" level=info msg="CreateContainer within sandbox \"1c10caff54b0f1e858f61e7c15a2a5ffe9a2fef8f07b1d4c036c2fd3fed065fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.607946702Z" level=info msg="CreateContainer within sandbox \"d0bfbff314a601df9a07287dcee6c82ecc775f43251b07ee9703e489e75348fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e\""
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.608137204Z" level=info msg="StartContainer for \"8f0f173c3c0b15977361c646e8f1ec54ebbfe51e58eaa06b948b5613c3ef1870\" returns successfully"
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.609094406Z" level=info msg="StartContainer for \"c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e\""
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.632745381Z" level=info msg="CreateContainer within sandbox \"1c10caff54b0f1e858f61e7c15a2a5ffe9a2fef8f07b1d4c036c2fd3fed065fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe\""
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.633444560Z" level=info msg="StartContainer for \"48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe\""
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.667572114Z" level=info msg="StartContainer for \"5fb27d28684302e9bdd3e507c5571ab54f0ca6d2eafc095348f4186270ef6dd0\" returns successfully"
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.741254801Z" level=info msg="StartContainer for \"48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe\" returns successfully"
	Oct 02 20:59:23 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:23.791484589Z" level=info msg="StartContainer for \"c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e\" returns successfully"
	Oct 02 20:59:34 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:34.630386643Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Oct 02 20:59:35 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:35.955950140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-th5cx,Uid:87861e3b-1048-4406-9d4f-7b1278cfbed8,Namespace:kube-system,Attempt:0,}"
	Oct 02 20:59:35 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:35.982175132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2btr,Uid:5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9,Namespace:kube-system,Attempt:0,}"
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.061423714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2btr,Uid:5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f407d899871bbee38d539a0bbeba66b08a42ef6a647911bf50d4dd09dc298a9f\""
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.073894255Z" level=info msg="CreateContainer within sandbox \"f407d899871bbee38d539a0bbeba66b08a42ef6a647911bf50d4dd09dc298a9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.093271219Z" level=info msg="CreateContainer within sandbox \"f407d899871bbee38d539a0bbeba66b08a42ef6a647911bf50d4dd09dc298a9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02\""
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.096393981Z" level=info msg="StartContainer for \"87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02\""
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.118149948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-th5cx,Uid:87861e3b-1048-4406-9d4f-7b1278cfbed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9341869a0c2fa62347e8966d8c3b4b08fdc2cd35ba2976aa9042dc7195fcfa\""
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.130209922Z" level=info msg="CreateContainer within sandbox \"db9341869a0c2fa62347e8966d8c3b4b08fdc2cd35ba2976aa9042dc7195fcfa\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.199434504Z" level=info msg="CreateContainer within sandbox \"db9341869a0c2fa62347e8966d8c3b4b08fdc2cd35ba2976aa9042dc7195fcfa\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c\""
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.220131270Z" level=info msg="StartContainer for \"e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c\""
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.284092754Z" level=info msg="StartContainer for \"87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02\" returns successfully"
	Oct 02 20:59:36 dockerenv-775346 containerd[752]: time="2025-10-02T20:59:36.318318746Z" level=info msg="StartContainer for \"e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c\" returns successfully"
	
	
	==> describe nodes <==
	Name:               dockerenv-775346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=dockerenv-775346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=dockerenv-775346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_59_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:59:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-775346
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:59:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:59:30 +0000   Thu, 02 Oct 2025 20:59:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:59:30 +0000   Thu, 02 Oct 2025 20:59:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:59:30 +0000   Thu, 02 Oct 2025 20:59:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 20:59:30 +0000   Thu, 02 Oct 2025 20:59:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-775346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ed34a5d5acc4537a40f0df0203022d2
	  System UUID:                c32f97eb-1f2d-4768-a3c0-484f67964f60
	  Boot ID:                    ddea27b5-1bb4-4ff4-b6ce-678e2308ca3c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-dockerenv-775346                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17s
	  kube-system                 kindnet-th5cx                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-apiserver-dockerenv-775346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-775346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-x2btr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-scheduler-dockerenv-775346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10s                kube-proxy       
	  Normal   NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 25s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 25s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node dockerenv-775346 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node dockerenv-775346 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node dockerenv-775346 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  17s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17s                kubelet          Node dockerenv-775346 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s                kubelet          Node dockerenv-775346 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s                kubelet          Node dockerenv-775346 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13s                node-controller  Node dockerenv-775346 event: Registered Node dockerenv-775346 in Controller
	
	
	==> dmesg <==
	[Oct 2 20:00] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 2 20:51] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8f0f173c3c0b15977361c646e8f1ec54ebbfe51e58eaa06b948b5613c3ef1870] <==
	{"level":"warn","ts":"2025-10-02T20:59:25.975891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:25.996239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.010290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.036021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.052094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.071829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.086936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.114570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.127387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.145248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.163877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.179323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.196430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.215114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.232874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.251741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.274562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.313201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.315898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.329129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.351911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.387471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.416016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.442040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:59:26.594230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60012","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:59:47 up 16:42,  0 user,  load average: 1.54, 2.33, 3.82
	Linux dockerenv-775346 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9a32d452833036c376ed3c93cea6fcec3b9df10205045f693b010fd16ff833c] <==
	I1002 20:59:36.492901       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 20:59:36.493329       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 20:59:36.493557       1 main.go:148] setting mtu 1500 for CNI 
	I1002 20:59:36.493660       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 20:59:36.493761       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T20:59:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 20:59:36.692507       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 20:59:36.692590       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 20:59:36.692622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 20:59:36.692927       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5fb27d28684302e9bdd3e507c5571ab54f0ca6d2eafc095348f4186270ef6dd0] <==
	E1002 20:59:27.745024       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1002 20:59:27.796240       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 20:59:27.813987       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:59:27.814419       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1002 20:59:27.824723       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1002 20:59:27.842773       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:59:27.849308       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 20:59:27.931808       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:59:28.395126       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 20:59:28.400379       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 20:59:28.400402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:59:29.153335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:59:29.211663       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:59:29.305957       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 20:59:29.313126       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 20:59:29.314298       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:59:29.319493       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:59:29.563640       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:59:30.301854       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:59:30.320794       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 20:59:30.333587       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:59:35.016841       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 20:59:35.312352       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:59:35.319361       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:59:35.361173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c87a17493be5940997552f9e598dd3a1a99851d77385206825bccfc423a4e97e] <==
	I1002 20:59:34.604102       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 20:59:34.604114       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 20:59:34.604356       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 20:59:34.604366       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 20:59:34.604522       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:59:34.604859       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="dockerenv-775346"
	I1002 20:59:34.605068       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 20:59:34.604672       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:59:34.605658       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:59:34.605830       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:59:34.607355       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 20:59:34.609225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:59:34.609443       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:59:34.609538       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:59:34.610387       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:59:34.612406       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:59:34.612583       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 20:59:34.612737       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 20:59:34.612901       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 20:59:34.613035       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 20:59:34.613146       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 20:59:34.614922       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:59:34.621006       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:59:34.623314       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 20:59:34.624137       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-775346" podCIDRs=["10.244.0.0/24"]
	
	
	==> kube-proxy [87c5429d2fc8c3ccf54a6a8915c0a9c0b9c5239ca9ceaf19028b770515a2dc02] <==
	I1002 20:59:36.344190       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:59:36.452973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:59:36.560385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:59:36.560603       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:59:36.560713       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:59:36.579743       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:59:36.579982       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:59:36.586002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:59:36.586509       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:59:36.586805       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:59:36.589599       1 config.go:200] "Starting service config controller"
	I1002 20:59:36.589933       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:59:36.590072       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:59:36.590156       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:59:36.590312       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:59:36.591094       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:59:36.595634       1 config.go:309] "Starting node config controller"
	I1002 20:59:36.595802       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:59:36.595883       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:59:36.690624       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:59:36.690828       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:59:36.691226       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [48296eb5b38fbfa582596583d745214e9734a1b113e60bde5b5377e4418aaafe] <==
	I1002 20:59:28.174677       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:59:28.178815       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:59:28.178856       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:59:28.179834       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:59:28.180222       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 20:59:28.188995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 20:59:28.193355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:59:28.193510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:59:28.193645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:59:28.193685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:59:28.193723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:59:28.195660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:59:28.195734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:59:28.196012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:59:28.196131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:59:28.196793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:59:28.201368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:59:28.201589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:59:28.201764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:59:28.201935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:59:28.202829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:59:28.203490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:59:28.203580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:59:28.203729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1002 20:59:29.379249       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.381019    1456 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-dockerenv-775346"
	Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: E1002 20:59:31.399512    1456 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-dockerenv-775346\" already exists" pod="kube-system/kube-apiserver-dockerenv-775346"
	Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.412370    1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-775346" podStartSLOduration=1.412352563 podStartE2EDuration="1.412352563s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.412159952 +0000 UTC m=+1.270369068" watchObservedRunningTime="2025-10-02 20:59:31.412352563 +0000 UTC m=+1.270561687"
	Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.468909    1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-775346" podStartSLOduration=1.468890142 podStartE2EDuration="1.468890142s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.435466117 +0000 UTC m=+1.293675241" watchObservedRunningTime="2025-10-02 20:59:31.468890142 +0000 UTC m=+1.327099258"
	Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.528045    1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-775346" podStartSLOduration=1.5280260669999999 podStartE2EDuration="1.528026067s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.482620257 +0000 UTC m=+1.340829373" watchObservedRunningTime="2025-10-02 20:59:31.528026067 +0000 UTC m=+1.386235192"
	Oct 02 20:59:31 dockerenv-775346 kubelet[1456]: I1002 20:59:31.529773    1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-dockerenv-775346" podStartSLOduration=1.5297338219999999 podStartE2EDuration="1.529733822s" podCreationTimestamp="2025-10-02 20:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:31.527982982 +0000 UTC m=+1.386192106" watchObservedRunningTime="2025-10-02 20:59:31.529733822 +0000 UTC m=+1.387942995"
	Oct 02 20:59:34 dockerenv-775346 kubelet[1456]: I1002 20:59:34.630015    1456 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 20:59:34 dockerenv-775346 kubelet[1456]: I1002 20:59:34.630616    1456 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109730    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsrl\" (UniqueName: \"kubernetes.io/projected/87861e3b-1048-4406-9d4f-7b1278cfbed8-kube-api-access-tcsrl\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109786    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-lib-modules\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109815    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/87861e3b-1048-4406-9d4f-7b1278cfbed8-cni-cfg\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109833    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87861e3b-1048-4406-9d4f-7b1278cfbed8-xtables-lock\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109850    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87861e3b-1048-4406-9d4f-7b1278cfbed8-lib-modules\") pod \"kindnet-th5cx\" (UID: \"87861e3b-1048-4406-9d4f-7b1278cfbed8\") " pod="kube-system/kindnet-th5cx"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109870    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tknr\" (UniqueName: \"kubernetes.io/projected/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-api-access-5tknr\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109891    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-proxy\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.109915    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-xtables-lock\") pod \"kube-proxy-x2btr\" (UID: \"5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9\") " pod="kube-system/kube-proxy-x2btr"
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.222573    1456 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.222612    1456 projected.go:196] Error preparing data for projected volume kube-api-access-tcsrl for pod kube-system/kindnet-th5cx: configmap "kube-root-ca.crt" not found
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.222690    1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87861e3b-1048-4406-9d4f-7b1278cfbed8-kube-api-access-tcsrl podName:87861e3b-1048-4406-9d4f-7b1278cfbed8 nodeName:}" failed. No retries permitted until 2025-10-02 20:59:35.722666171 +0000 UTC m=+5.580875287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tcsrl" (UniqueName: "kubernetes.io/projected/87861e3b-1048-4406-9d4f-7b1278cfbed8-kube-api-access-tcsrl") pod "kindnet-th5cx" (UID: "87861e3b-1048-4406-9d4f-7b1278cfbed8") : configmap "kube-root-ca.crt" not found
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.226692    1456 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.226728    1456 projected.go:196] Error preparing data for projected volume kube-api-access-5tknr for pod kube-system/kube-proxy-x2btr: configmap "kube-root-ca.crt" not found
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: E1002 20:59:35.226792    1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-api-access-5tknr podName:5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9 nodeName:}" failed. No retries permitted until 2025-10-02 20:59:35.726769725 +0000 UTC m=+5.584978849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5tknr" (UniqueName: "kubernetes.io/projected/5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9-kube-api-access-5tknr") pod "kube-proxy-x2btr" (UID: "5eb2eb7d-c09d-45eb-a9c1-09d381e8e7c9") : configmap "kube-root-ca.crt" not found
	Oct 02 20:59:35 dockerenv-775346 kubelet[1456]: I1002 20:59:35.815702    1456 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 20:59:36 dockerenv-775346 kubelet[1456]: I1002 20:59:36.429801    1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x2btr" podStartSLOduration=1.429782708 podStartE2EDuration="1.429782708s" podCreationTimestamp="2025-10-02 20:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:36.406256626 +0000 UTC m=+6.264465758" watchObservedRunningTime="2025-10-02 20:59:36.429782708 +0000 UTC m=+6.287991832"
	Oct 02 20:59:36 dockerenv-775346 kubelet[1456]: I1002 20:59:36.883185    1456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-th5cx" podStartSLOduration=1.883155544 podStartE2EDuration="1.883155544s" podCreationTimestamp="2025-10-02 20:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:59:36.430458666 +0000 UTC m=+6.288667790" watchObservedRunningTime="2025-10-02 20:59:36.883155544 +0000 UTC m=+6.741364684"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p dockerenv-775346 -n dockerenv-775346
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-775346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rmx99 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-775346 describe pod coredns-66bc5c9577-rmx99 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-775346 describe pod coredns-66bc5c9577-rmx99 storage-provisioner: exit status 1 (85.133645ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-rmx99" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context dockerenv-775346 describe pod coredns-66bc5c9577-rmx99 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-775346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-775346
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-775346: (2.268567365s)
--- FAIL: TestDockerEnvContainerd (48.81s)
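Editor's note: the post-mortem steps logged above (helpers_test.go:262, :269 and :285) can be re-run by hand before the profile is deleted. The sketch below only strings together the same minikube and kubectl invocations shown in this report; PROFILE is an illustrative shell variable introduced here, not something the test harness uses.

# Minimal sketch, assuming the dockerenv-775346 profile still exists
# (the harness deletes it at the end of this test).
PROFILE=dockerenv-775346

# API server status, as in helpers_test.go:262
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"

# Pods that are not Running, as in helpers_test.go:269
kubectl --context "$PROFILE" get po -A \
  --field-selector=status.phase!=Running \
  -o=jsonpath='{.items[*].metadata.name}'

# Describe whatever the previous command prints; in this run that was
# coredns-66bc5c9577-rmx99 and storage-provisioner, which had already
# disappeared by the time describe ran (hence the NotFound errors below).
kubectl --context "$PROFILE" describe pod coredns-66bc5c9577-rmx99 storage-provisioner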

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-029371 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-029371 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-029371 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-029371 --alsologtostderr -v=1] stderr:
I1002 21:13:25.783732 2823265 out.go:360] Setting OutFile to fd 1 ...
I1002 21:13:25.785121 2823265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:25.785141 2823265 out.go:374] Setting ErrFile to fd 2...
I1002 21:13:25.785146 2823265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:25.785490 2823265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
I1002 21:13:25.787780 2823265 mustload.go:65] Loading cluster: functional-029371
I1002 21:13:25.788302 2823265 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:25.788935 2823265 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
I1002 21:13:25.822873 2823265 host.go:66] Checking if "functional-029371" exists ...
I1002 21:13:25.823183 2823265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 21:13:25.929883 2823265 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:13:25.91982907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 21:13:25.929996 2823265 api_server.go:166] Checking apiserver status ...
I1002 21:13:25.930059 2823265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 21:13:25.930096 2823265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
I1002 21:13:25.965234 2823265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
I1002 21:13:26.089268 2823265 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4728/cgroup
I1002 21:13:26.100273 2823265 api_server.go:182] apiserver freezer: "9:freezer:/docker/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/kubepods/burstable/pod3593143b83847bd072d65be826f433d9/ec86407873fe8df85e4887b5c5b2b21b30f5b2fe009c3928a9a2d4b98c874b5a"
I1002 21:13:26.100347 2823265 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/kubepods/burstable/pod3593143b83847bd072d65be826f433d9/ec86407873fe8df85e4887b5c5b2b21b30f5b2fe009c3928a9a2d4b98c874b5a/freezer.state
I1002 21:13:26.109140 2823265 api_server.go:204] freezer state: "THAWED"
I1002 21:13:26.109168 2823265 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 21:13:26.117843 2823265 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1002 21:13:26.117882 2823265 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 21:13:26.118061 2823265 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:26.118081 2823265 addons.go:69] Setting dashboard=true in profile "functional-029371"
I1002 21:13:26.118088 2823265 addons.go:238] Setting addon dashboard=true in "functional-029371"
I1002 21:13:26.118115 2823265 host.go:66] Checking if "functional-029371" exists ...
I1002 21:13:26.118544 2823265 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
I1002 21:13:26.140060 2823265 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 21:13:26.142979 2823265 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 21:13:26.145972 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 21:13:26.146024 2823265 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 21:13:26.146101 2823265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
I1002 21:13:26.164403 2823265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
I1002 21:13:26.270492 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 21:13:26.270517 2823265 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 21:13:26.285078 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 21:13:26.285112 2823265 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 21:13:26.300893 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 21:13:26.300916 2823265 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 21:13:26.314944 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 21:13:26.314964 2823265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 21:13:26.332009 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 21:13:26.332029 2823265 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 21:13:26.348411 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 21:13:26.348437 2823265 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 21:13:26.363309 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 21:13:26.363330 2823265 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 21:13:26.377166 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 21:13:26.377193 2823265 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 21:13:26.391829 2823265 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 21:13:26.391851 2823265 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 21:13:26.407390 2823265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 21:13:27.341095 2823265 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-029371 addons enable metrics-server

I1002 21:13:27.344981 2823265 addons.go:201] Writing out "functional-029371" config to set dashboard=true...
W1002 21:13:27.345266 2823265 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 21:13:27.345938 2823265 kapi.go:59] client config for functional-029371: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.key", CAFile:"/home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 21:13:27.346498 2823265 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 21:13:27.346518 2823265 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 21:13:27.346525 2823265 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 21:13:27.346535 2823265 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 21:13:27.346539 2823265 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 21:13:27.364873 2823265 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  8b54a9a1-eb3b-46c2-84fc-9b6491d73f9b 1431 0 2025-10-02 21:13:27 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 21:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.12.222,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.12.222],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
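(Editor's note: the "Found service" dump above is minikube's health check confirming the kubernetes-dashboard Service exists, with ClusterIP 10.103.12.222 and port 80 targeting 9090. A minimal client-go sketch of the same lookup; the KUBECONFIG environment variable is an assumption here, whereas minikube builds its client from the profile certificates shown in the config dump above.)

```go
// svc_check.go - illustrative sketch of the Service lookup behind the
// "Found service" log line above. The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc, err := clientset.CoreV1().Services("kubernetes-dashboard").
		Get(context.Background(), "kubernetes-dashboard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// For the run above this would report ClusterIP 10.103.12.222, port 80 -> 9090.
	fmt.Printf("found %s, ClusterIP=%s, ports=%v\n", svc.Name, svc.Spec.ClusterIP, svc.Spec.Ports)
}
```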
W1002 21:13:27.365048 2823265 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 21:13:27.365122 2823265 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-029371 proxy --port 36195]
I1002 21:13:27.365399 2823265 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 21:13:27.429108 2823265 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
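(Editor's note: dashboard.go above launches `kubectl --context functional-029371 proxy --port 36195` and waits for the "Starting to serve on host:port" line before probing it. A minimal sketch of that launch-and-wait pattern, assuming kubectl is on PATH and the functional-029371 context exists; the port is taken from this log.)

```go
// proxy_launch.go - illustrative sketch of starting `kubectl proxy` and waiting
// for its readiness line, as the dashboard.go lines above do.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-029371", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "Starting to serve on") {
			// e.g. "Starting to serve on 127.0.0.1:36195", matching the line above.
			fmt.Println("proxy ready:", line)
			break
		}
	}
	// A real caller keeps the proxy running and health-checks the proxied URL next.
	_ = cmd.Process.Kill()
}
```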
W1002 21:13:27.429174 2823265 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 21:13:27.457667 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6067f46-ef3f-43d4-ba93-df714cd30ca7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078f4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004b37c0 TLS:<nil>}
I1002 21:13:27.457747 2823265 retry.go:31] will retry after 136.359µs: Temporary Error: unexpected response code: 503
I1002 21:13:27.466196 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a21401c-49c2-43cb-8639-0960f8b5871f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004b3900 TLS:<nil>}
I1002 21:13:27.466288 2823265 retry.go:31] will retry after 222.157µs: Temporary Error: unexpected response code: 503
I1002 21:13:27.471968 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15b140a5-e320-4fb0-9db3-9a7d5144e97c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078f5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004b3a40 TLS:<nil>}
I1002 21:13:27.472039 2823265 retry.go:31] will retry after 302.764µs: Temporary Error: unexpected response code: 503
I1002 21:13:27.476685 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c35243af-5352-4ea3-8a07-65d02b7fbb92] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078f640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e140 TLS:<nil>}
I1002 21:13:27.476759 2823265 retry.go:31] will retry after 474.044µs: Temporary Error: unexpected response code: 503
I1002 21:13:27.480673 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0bcdf032-5ff2-44ed-985a-ef42ce4ee86b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078f6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e3c0 TLS:<nil>}
I1002 21:13:27.480754 2823265 retry.go:31] will retry after 441.32µs: Temporary Error: unexpected response code: 503
I1002 21:13:27.484463 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc8fb6ba-b7a3-4c2b-9933-ea4b5a0ee5f1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044eb40 TLS:<nil>}
I1002 21:13:27.484522 2823265 retry.go:31] will retry after 960.387µs: Temporary Error: unexpected response code: 503
I1002 21:13:27.488387 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1853d6f0-1aee-45b1-9f9a-3d3bc767315d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078f7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e500 TLS:<nil>}
I1002 21:13:27.488446 2823265 retry.go:31] will retry after 1.309234ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.493338 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28bf74e3-402f-49c5-902f-0ce126b6ba83] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044ec80 TLS:<nil>}
I1002 21:13:27.493405 2823265 retry.go:31] will retry after 1.998939ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.498536 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9325104d-7728-4dd3-92b7-d2f495b90875] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044edc0 TLS:<nil>}
I1002 21:13:27.498598 2823265 retry.go:31] will retry after 3.640496ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.505809 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[75a7f7ba-1693-4354-b9ed-ffdc02f66f37] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044ef00 TLS:<nil>}
I1002 21:13:27.505872 2823265 retry.go:31] will retry after 2.765578ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.512435 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb123a8a-5c4f-487a-87a0-f3a277adb839] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e640 TLS:<nil>}
I1002 21:13:27.512512 2823265 retry.go:31] will retry after 4.581435ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.520840 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[608493b2-b1cc-4d2e-ab4b-2087018b21b0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044f180 TLS:<nil>}
I1002 21:13:27.520908 2823265 retry.go:31] will retry after 10.987355ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.541050 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b4695db8-8ef9-485b-b0ef-fd1eeeb06f0b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078fac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044f2c0 TLS:<nil>}
I1002 21:13:27.541123 2823265 retry.go:31] will retry after 16.204014ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.561726 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d119c260-3106-4527-8caf-ad767b4a977c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044f400 TLS:<nil>}
I1002 21:13:27.561802 2823265 retry.go:31] will retry after 14.510777ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.585049 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11e72361-7b0b-4259-8334-cf50feccab75] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044f540 TLS:<nil>}
I1002 21:13:27.585130 2823265 retry.go:31] will retry after 24.15577ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.622553 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[420f7237-382d-4a63-bb1d-8650d6b7ff58] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x400078fc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e780 TLS:<nil>}
I1002 21:13:27.622651 2823265 retry.go:31] will retry after 54.76571ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.686727 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb4af4fb-de4b-4fa3-8028-5cefbc64f48a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044f680 TLS:<nil>}
I1002 21:13:27.686790 2823265 retry.go:31] will retry after 54.947612ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.747444 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72da0b25-5d15-4a9f-b676-712ac84b69c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044f7c0 TLS:<nil>}
I1002 21:13:27.747522 2823265 retry.go:31] will retry after 129.145996ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.880277 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe20eb33-c127-4d02-949f-f0c32011c080] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044f900 TLS:<nil>}
I1002 21:13:27.880340 2823265 retry.go:31] will retry after 102.08813ms: Temporary Error: unexpected response code: 503
I1002 21:13:27.985771 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bfa2907d-1492-47e9-ad1f-793118dfc5a1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:27 GMT]] Body:0x40007ee900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044fa40 TLS:<nil>}
I1002 21:13:27.985851 2823265 retry.go:31] will retry after 284.378485ms: Temporary Error: unexpected response code: 503
I1002 21:13:28.274598 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a21f7f4c-c462-40dc-adab-4e2d5fa2c33a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:28 GMT]] Body:0x400078ff00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044fb80 TLS:<nil>}
I1002 21:13:28.274661 2823265 retry.go:31] will retry after 452.162931ms: Temporary Error: unexpected response code: 503
I1002 21:13:28.730153 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b14c61a5-e2f0-4b8e-93f6-3156e8d01873] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:28 GMT]] Body:0x40016f0040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044fcc0 TLS:<nil>}
I1002 21:13:28.730216 2823265 retry.go:31] will retry after 323.153713ms: Temporary Error: unexpected response code: 503
I1002 21:13:29.056713 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a17b6b60-63e2-42aa-8828-45915fdbcaef] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:29 GMT]] Body:0x40016f0100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400044fe00 TLS:<nil>}
I1002 21:13:29.056831 2823265 retry.go:31] will retry after 735.042655ms: Temporary Error: unexpected response code: 503
I1002 21:13:29.795246 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7cec1c8-f7ce-480e-ad8f-291485706874] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:29 GMT]] Body:0x40007eea80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416000 TLS:<nil>}
I1002 21:13:29.795338 2823265 retry.go:31] will retry after 1.231765348s: Temporary Error: unexpected response code: 503
I1002 21:13:31.031071 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2417fdb1-1ccd-45af-9264-e8af9a902315] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:31 GMT]] Body:0x40007eeb00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416140 TLS:<nil>}
I1002 21:13:31.031133 2823265 retry.go:31] will retry after 2.246549654s: Temporary Error: unexpected response code: 503
I1002 21:13:33.282458 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72633e41-7155-436c-8efb-baa6c798c4c9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:33 GMT]] Body:0x40007eeb80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416500 TLS:<nil>}
I1002 21:13:33.282525 2823265 retry.go:31] will retry after 2.275880844s: Temporary Error: unexpected response code: 503
I1002 21:13:35.561656 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5be60998-611e-4fc8-91a1-972d6668f705] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:35 GMT]] Body:0x40007eec00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416640 TLS:<nil>}
I1002 21:13:35.561717 2823265 retry.go:31] will retry after 3.778764847s: Temporary Error: unexpected response code: 503
I1002 21:13:39.343338 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[85878125-6ae5-49b0-9ec5-c4f6a61b8d05] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:39 GMT]] Body:0x40016f0340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e8c0 TLS:<nil>}
I1002 21:13:39.343406 2823265 retry.go:31] will retry after 5.796702276s: Temporary Error: unexpected response code: 503
I1002 21:13:45.145895 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8eee95f-b999-42e4-b417-3bf2295ae5de] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:45 GMT]] Body:0x40016f0400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416780 TLS:<nil>}
I1002 21:13:45.145969 2823265 retry.go:31] will retry after 4.496803182s: Temporary Error: unexpected response code: 503
I1002 21:13:49.647550 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88473cfb-ae2b-4abe-91c9-3bd048a0be2b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:13:49 GMT]] Body:0x400178c080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031ea00 TLS:<nil>}
I1002 21:13:49.647610 2823265 retry.go:31] will retry after 14.735194038s: Temporary Error: unexpected response code: 503
I1002 21:14:04.389458 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28ce626d-7997-4677-8dbf-7932de0b4238] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:14:04 GMT]] Body:0x400178c100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004168c0 TLS:<nil>}
I1002 21:14:04.389517 2823265 retry.go:31] will retry after 23.252607587s: Temporary Error: unexpected response code: 503
I1002 21:14:27.645534 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e1ee717-bc7d-447a-ae0f-da44b93ad5e3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:14:27 GMT]] Body:0x400178c1c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416a00 TLS:<nil>}
I1002 21:14:27.645594 2823265 retry.go:31] will retry after 30.148339871s: Temporary Error: unexpected response code: 503
I1002 21:14:57.796932 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e9628f04-ad4c-40c1-bf4b-b9a5a62050b4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:14:57 GMT]] Body:0x40016f0600 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416b40 TLS:<nil>}
I1002 21:14:57.796992 2823265 retry.go:31] will retry after 1m3.583383567s: Temporary Error: unexpected response code: 503
I1002 21:16:01.384551 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8eda903c-5d39-4966-b506-745badd62b92] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:16:01 GMT]] Body:0x400178c080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031eb40 TLS:<nil>}
I1002 21:16:01.384619 2823265 retry.go:31] will retry after 36.521882651s: Temporary Error: unexpected response code: 503
I1002 21:16:37.909700 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de1816d5-08ef-4d02-9b15-76fcbf899fe9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:16:37 GMT]] Body:0x400178c180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031ec80 TLS:<nil>}
I1002 21:16:37.909760 2823265 retry.go:31] will retry after 53.516134377s: Temporary Error: unexpected response code: 503
I1002 21:17:31.429054 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dc47e198-400f-4f49-8c88-1a67899c8fd7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:17:31 GMT]] Body:0x40016f0100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031ef00 TLS:<nil>}
I1002 21:17:31.429129 2823265 retry.go:31] will retry after 38.869099417s: Temporary Error: unexpected response code: 503
I1002 21:18:10.305850 2823265 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[51ee3c3c-2951-43e8-a4c3-c77cfa15642c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 21:18:10 GMT]] Body:0x40016f0280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000416c80 TLS:<nil>}
I1002 21:18:10.305926 2823265 retry.go:31] will retry after 35.152307887s: Temporary Error: unexpected response code: 503
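(Editor's note: the loop above is minikube's proxy health check. Every probe of the proxied dashboard URL returns 503 because the dashboard pod never becomes ready, and the retry delay grows roughly exponentially from microseconds to over a minute until the test's timeout expires. A minimal sketch of that poll-with-backoff pattern against the same URL; the backoff factor, cap, and deadline below are assumptions, not minikube's exact retry.go parameters.)

```go
// proxy_poll.go - illustrative sketch of the poll-with-backoff loop shown above.
// The URL is taken from this log; backoff factor, cap, and deadline are assumptions.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	delay := 200 * time.Microsecond
	deadline := time.Now().Add(5 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("dashboard proxy healthy")
				return
			}
			// In the failing run above this prints 503 on every attempt.
			fmt.Printf("unexpected response code: %d, retrying in %v\n", resp.StatusCode, delay)
		}
		time.Sleep(delay)
		if delay *= 2; delay > time.Minute {
			delay = time.Minute
		}
	}
	fmt.Println("gave up: dashboard never became healthy")
}
```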
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-029371
helpers_test.go:243: (dbg) docker inspect functional-029371:

-- stdout --
	[
	    {
	        "Id": "090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3",
	        "Created": "2025-10-02T21:00:51.978972474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2811196,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:00:52.062744723Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/hosts",
	        "LogPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3-json.log",
	        "Name": "/functional-029371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-029371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-029371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3",
	                "LowerDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30-init/diff:/var/lib/docker/overlay2/51331203fb22f22857c79ac4aca1f3d12d523fa3ef805f7f258c2d1849e728ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/merged",
	                "UpperDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/diff",
	                "WorkDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-029371",
	                "Source": "/var/lib/docker/volumes/functional-029371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-029371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-029371",
	                "name.minikube.sigs.k8s.io": "functional-029371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1fd369d7c675f494df1af8bbeb228ab303420ec6e440618440a08cd22840ddd9",
	            "SandboxKey": "/var/run/docker/netns/1fd369d7c675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36127"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36128"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36131"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36129"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36130"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-029371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:8a:b4:10:41:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "00e3563aa4808dcd5f3a224a2151deb754278db778c1a4a02e08e667b6d2949c",
	                    "EndpointID": "5ce2c0a1f336f8f0a42c5f4a14f366cc54ee230716ae07896a98b853c1146cb5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-029371",
	                        "090c5f703e06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
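(Editor's note: the post-mortem inspect above shows the node container still running, with NetworkSettings.Ports mapping 22->36127, 2376->36128, 5000->36129, 8441->36130 (the API server port the client config targets), and 32443->36131. A minimal sketch, assuming the docker CLI is available, that decodes just that port table from `docker inspect` JSON of the shape shown above.)

```go
// ports_from_inspect.go - illustrative: decode only NetworkSettings.Ports from
// `docker inspect` output matching the JSON structure above. Assumes docker is
// on PATH and the "functional-029371" container exists.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-029371").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	if len(entries) == 0 {
		panic("no such container")
	}
	for port, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}
```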
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-029371 -n functional-029371
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 logs -n 25: (1.459233347s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-029371 image load --daemon kicbase/echo-server:functional-029371 --alsologtostderr                                                                   │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls                                                                                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image save kicbase/echo-server:functional-029371 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image rm kicbase/echo-server:functional-029371 --alsologtostderr                                                                              │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls                                                                                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls                                                                                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image save --daemon kicbase/echo-server:functional-029371 --alsologtostderr                                                                   │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh sudo cat /etc/test/nested/copy/2785630/hosts                                                                                              │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh sudo cat /etc/ssl/certs/2785630.pem                                                                                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh sudo cat /usr/share/ca-certificates/2785630.pem                                                                                           │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh sudo cat /etc/ssl/certs/27856302.pem                                                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh sudo cat /usr/share/ca-certificates/27856302.pem                                                                                          │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls --format short --alsologtostderr                                                                                                     │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls --format yaml --alsologtostderr                                                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh            │ functional-029371 ssh pgrep buildkitd                                                                                                                           │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │                     │
	│ image          │ functional-029371 image build -t localhost/my-image:functional-029371 testdata/build --alsologtostderr                                                          │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls                                                                                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls --format json --alsologtostderr                                                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ image          │ functional-029371 image ls --format table --alsologtostderr                                                                                                     │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ update-context │ functional-029371 update-context --alsologtostderr -v=2                                                                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ update-context │ functional-029371 update-context --alsologtostderr -v=2                                                                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ update-context │ functional-029371 update-context --alsologtostderr -v=2                                                                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:13:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:13:25.421148 2823136 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:13:25.421376 2823136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:13:25.421387 2823136 out.go:374] Setting ErrFile to fd 2...
	I1002 21:13:25.421393 2823136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:13:25.421732 2823136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:13:25.422226 2823136 out.go:368] Setting JSON to false
	I1002 21:13:25.424330 2823136 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":60955,"bootTime":1759378651,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 21:13:25.424411 2823136 start.go:140] virtualization:  
	I1002 21:13:25.427683 2823136 out.go:179] * [functional-029371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:13:25.432394 2823136 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:13:25.433112 2823136 notify.go:220] Checking for updates...
	I1002 21:13:25.438147 2823136 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:13:25.440960 2823136 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:13:25.443750 2823136 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 21:13:25.446592 2823136 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:13:25.449386 2823136 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:13:25.452681 2823136 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:13:25.453344 2823136 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:13:25.492345 2823136 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:13:25.492466 2823136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:13:25.607116 2823136 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:13:25.597081351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:13:25.607226 2823136 docker.go:318] overlay module found
	I1002 21:13:25.610544 2823136 out.go:179] * Using the docker driver based on existing profile
	I1002 21:13:25.613362 2823136 start.go:304] selected driver: docker
	I1002 21:13:25.613384 2823136 start.go:924] validating driver "docker" against &{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:13:25.613487 2823136 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:13:25.613591 2823136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:13:25.713438 2823136 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:13:25.704253073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:13:25.713872 2823136 cni.go:84] Creating CNI manager for ""
	I1002 21:13:25.713943 2823136 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 21:13:25.713991 2823136 start.go:348] cluster config:
	{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:13:25.717119 2823136 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	62517abd539ac       1611cd07b61d5       5 minutes ago       Exited              mount-munger              0                   eb64f4b609a6b       busybox-mount                               default
	1a697c0c38a23       ce2d2cda2d858       5 minutes ago       Running             echo-server               0                   1b61ad624188c       hello-node-75c85bcc94-jvqz4                 default
	f7ec92ef7ee86       35f3cbee4fb77       15 minutes ago      Running             nginx                     0                   d2770ddcd54ff       nginx-svc                                   default
	e9301c91add10       ba04bb24b9575       15 minutes ago      Running             storage-provisioner       2                   d016164eeb92f       storage-provisioner                         kube-system
	4a78f66b8de9a       7eb2c6ff0c5a7       15 minutes ago      Running             kube-controller-manager   2                   f8f65514862b2       kube-controller-manager-functional-029371   kube-system
	ec86407873fe8       43911e833d64d       15 minutes ago      Running             kube-apiserver            0                   4bff9fa30870b       kube-apiserver-functional-029371            kube-system
	0dd8df4eab17a       b5f57ec6b9867       15 minutes ago      Running             kube-scheduler            1                   7bbfe7c234b3a       kube-scheduler-functional-029371            kube-system
	ff6176ec7ae2d       a1894772a478e       15 minutes ago      Running             etcd                      1                   f385ef2d71fec       etcd-functional-029371                      kube-system
	9363aff35a4ac       7eb2c6ff0c5a7       15 minutes ago      Exited              kube-controller-manager   1                   f8f65514862b2       kube-controller-manager-functional-029371   kube-system
	bb62981a90b2e       05baa95f5142d       15 minutes ago      Running             kube-proxy                1                   095dc989df9d3       kube-proxy-xd2gs                            kube-system
	9f4fa4e6cafcd       ba04bb24b9575       15 minutes ago      Exited              storage-provisioner       1                   d016164eeb92f       storage-provisioner                         kube-system
	e13a9218fb36c       138784d87c9c5       15 minutes ago      Running             coredns                   1                   28a525d91513d       coredns-66bc5c9577-bswh9                    kube-system
	c0544bb436a09       b1a8c6f707935       15 minutes ago      Running             kindnet-cni               1                   ebe0641167404       kindnet-9zmhd                               kube-system
	6e626b9db7e71       138784d87c9c5       16 minutes ago      Exited              coredns                   0                   28a525d91513d       coredns-66bc5c9577-bswh9                    kube-system
	fa91f8ea7d10f       b1a8c6f707935       17 minutes ago      Exited              kindnet-cni               0                   ebe0641167404       kindnet-9zmhd                               kube-system
	71353644d4012       05baa95f5142d       17 minutes ago      Exited              kube-proxy                0                   095dc989df9d3       kube-proxy-xd2gs                            kube-system
	97c3f3f108740       a1894772a478e       17 minutes ago      Exited              etcd                      0                   f385ef2d71fec       etcd-functional-029371                      kube-system
	37a0176519c77       b5f57ec6b9867       17 minutes ago      Exited              kube-scheduler            0                   7bbfe7c234b3a       kube-scheduler-functional-029371            kube-system
	
	
	==> containerd <==
	Oct 02 21:14:11 functional-029371 containerd[3583]: time="2025-10-02T21:14:11.671499332Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 21:14:11 functional-029371 containerd[3583]: time="2025-10-02T21:14:11.674460964Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:14:11 functional-029371 containerd[3583]: time="2025-10-02T21:14:11.817691395Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:14:12 functional-029371 containerd[3583]: time="2025-10-02T21:14:12.090551749Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:14:12 functional-029371 containerd[3583]: time="2025-10-02T21:14:12.090602212Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 02 21:15:01 functional-029371 containerd[3583]: time="2025-10-02T21:15:01.670361777Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 21:15:01 functional-029371 containerd[3583]: time="2025-10-02T21:15:01.672933281Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:15:01 functional-029371 containerd[3583]: time="2025-10-02T21:15:01.815610158Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:15:02 functional-029371 containerd[3583]: time="2025-10-02T21:15:02.101632205Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:15:02 functional-029371 containerd[3583]: time="2025-10-02T21:15:02.101682298Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 02 21:15:03 functional-029371 containerd[3583]: time="2025-10-02T21:15:03.670875878Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 21:15:03 functional-029371 containerd[3583]: time="2025-10-02T21:15:03.673365223Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:15:03 functional-029371 containerd[3583]: time="2025-10-02T21:15:03.837942402Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:15:04 functional-029371 containerd[3583]: time="2025-10-02T21:15:04.110451262Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:15:04 functional-029371 containerd[3583]: time="2025-10-02T21:15:04.110753888Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 02 21:16:26 functional-029371 containerd[3583]: time="2025-10-02T21:16:26.670577887Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 21:16:26 functional-029371 containerd[3583]: time="2025-10-02T21:16:26.672981594Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:16:26 functional-029371 containerd[3583]: time="2025-10-02T21:16:26.805469895Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:16:27 functional-029371 containerd[3583]: time="2025-10-02T21:16:27.201476519Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:16:27 functional-029371 containerd[3583]: time="2025-10-02T21:16:27.201602027Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=12712"
	Oct 02 21:16:34 functional-029371 containerd[3583]: time="2025-10-02T21:16:34.671198235Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 21:16:34 functional-029371 containerd[3583]: time="2025-10-02T21:16:34.673717759Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:16:34 functional-029371 containerd[3583]: time="2025-10-02T21:16:34.797061756Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:16:35 functional-029371 containerd[3583]: time="2025-10-02T21:16:35.079563960Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:16:35 functional-029371 containerd[3583]: time="2025-10-02T21:16:35.079590225Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
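	
	The repeated "failed to decode hosts.toml" / "invalid `host` tree" entries above are containerd reporting that a registry hosts.toml it loaded while resolving docker.io did not have the table structure it expects for the `host` entry; the pull failures themselves are Docker Hub's unauthenticated 429 rate limit rather than a cluster fault. As a rough, illustrative sketch only (the mirror URL is a placeholder and is not taken from this run), a well-formed hosts.toml under /etc/containerd/certs.d/docker.io/ on the node looks like:
	
	    # which upstream registry this directory configures
	    server = "https://registry-1.docker.io"
	
	    # each mirror is its own [host."..."] table; a `host` entry that is not
	    # a table like this is what triggers the "invalid `host` tree" decode error
	    [host."https://mirror.example.com"]
	      capabilities = ["pull", "resolve"]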
	
	
	==> coredns [6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36470 - 16817 "HINFO IN 8350429670381813791.6003931427677546625. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021892735s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e13a9218fb36c96f900452fa4804b05d1af634f65dabde0e99e4745bf3bdd984] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51972 - 42555 "HINFO IN 2726958689615771147.4044054909872593520. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046976422s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-029371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-029371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=functional-029371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_01_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:01:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-029371
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:18:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:18:24 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:18:24 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:18:24 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:18:24 +0000   Thu, 02 Oct 2025 21:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-029371
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd50d735b20e43169e671ed5ecbfe749
	  System UUID:                482999fa-369e-4d58-bd97-98172b118eff
	  Boot ID:                    ddea27b5-1bb4-4ff4-b6ce-678e2308ca3c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-jvqz4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-hf52j           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-bswh9                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-functional-029371                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-9zmhd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-029371              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-029371     200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-xd2gs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-029371              100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vxvmb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5d2qw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                node-controller  Node functional-029371 event: Registered Node functional-029371 in Controller
	  Normal   NodeReady                16m                kubelet          Node functional-029371 status is now: NodeReady
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node functional-029371 event: Registered Node functional-029371 in Controller
	
	
	==> dmesg <==
	[Oct 2 20:00] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 2 20:51] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890] <==
	{"level":"warn","ts":"2025-10-02T21:01:09.091121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.113696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.140304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.163581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.214029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.290338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:02:00.115325Z","caller":"traceutil/trace.go:172","msg":"trace[291594329] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"103.807712ms","start":"2025-10-02T21:02:00.011494Z","end":"2025-10-02T21:02:00.115302Z","steps":["trace[291594329] 'process raft request'  (duration: 103.662216ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T21:02:38.044223Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T21:02:38.044273Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-029371","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T21:02:38.044393Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:02:38.045901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:02:38.047455Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.047522Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T21:02:38.047636Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T21:02:38.047654Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T21:02:38.047956Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048012Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:02:38.048024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048106Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048130Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:02:38.048140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.050950Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T21:02:38.051086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.051113Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T21:02:38.051121Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-029371","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ff6176ec7ae2de2fdb8b2e8cbe1b6888a2b29bb1783765d18ed72f5fa5850090] <==
	{"level":"warn","ts":"2025-10-02T21:02:45.165726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.180102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.198523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.236131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.251965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.277163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.288844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.309935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.328810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.347740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.364690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.382356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.412787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.432090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.448087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.478001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.495710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.508017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.564423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38004","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:12:43.986090Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2025-10-02T21:12:44.008580Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1031,"took":"22.217035ms","hash":2016753172,"current-db-size-bytes":3067904,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1282048,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-02T21:12:44.008635Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2016753172,"revision":1031,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T21:17:43.993970Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1336}
	{"level":"info","ts":"2025-10-02T21:17:43.997789Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1336,"took":"3.314458ms","hash":3319506663,"current-db-size-bytes":3067904,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":2256896,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-10-02T21:17:43.997841Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3319506663,"revision":1336,"compact-revision":1031}
	
	
	==> kernel <==
	 21:18:27 up 17:00,  0 user,  load average: 0.51, 0.42, 1.41
	Linux functional-029371 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0544bb436a0906cfd062760bdbcd21a2d29e77e585ae36ebb930aa43c485e98] <==
	I1002 21:16:18.711568       1 main.go:301] handling current node
	I1002 21:16:28.714322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:16:28.714551       1 main.go:301] handling current node
	I1002 21:16:38.712076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:16:38.712111       1 main.go:301] handling current node
	I1002 21:16:48.711548       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:16:48.711582       1 main.go:301] handling current node
	I1002 21:16:58.712703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:16:58.712739       1 main.go:301] handling current node
	I1002 21:17:08.712002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:17:08.712037       1 main.go:301] handling current node
	I1002 21:17:18.711245       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:17:18.711298       1 main.go:301] handling current node
	I1002 21:17:28.711954       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:17:28.712234       1 main.go:301] handling current node
	I1002 21:17:38.719902       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:17:38.719939       1 main.go:301] handling current node
	I1002 21:17:48.711430       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:17:48.711467       1 main.go:301] handling current node
	I1002 21:17:58.717657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:17:58.717692       1 main.go:301] handling current node
	I1002 21:18:08.711611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:18:08.711711       1 main.go:301] handling current node
	I1002 21:18:18.711818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:18:18.711853       1 main.go:301] handling current node
	
	
	==> kindnet [fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342] <==
	I1002 21:01:19.695785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:01:19.696051       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 21:01:19.696173       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:01:19.696193       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:01:19.696203       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:01:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:01:19.891576       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:01:19.891800       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:01:19.891903       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:01:19.892752       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:01:49.891919       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:01:49.892931       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:01:49.892947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:01:49.893282       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:01:51.493072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:01:51.493164       1 metrics.go:72] Registering metrics
	I1002 21:01:51.493395       1 controller.go:711] "Syncing nftables rules"
	I1002 21:01:59.897194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:01:59.897288       1 main.go:301] handling current node
	I1002 21:02:09.897832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:02:09.897866       1 main.go:301] handling current node
	I1002 21:02:19.895132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:02:19.895159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ec86407873fe8df85e4887b5c5b2b21b30f5b2fe009c3928a9a2d4b98c874b5a] <==
	I1002 21:02:46.394733       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:02:46.394833       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:02:46.394932       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:02:46.408635       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:02:46.408912       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:02:46.410234       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:02:46.418967       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:02:46.721100       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:02:47.095033       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1002 21:02:47.431828       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 21:02:47.433331       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:02:47.439034       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:02:48.174490       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:02:48.324212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:02:48.433235       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:02:48.444842       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:02:50.077548       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:03:03.853124       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.168.209"}
	I1002 21:03:10.555108       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.142.42"}
	I1002 21:03:19.105685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.24.217"}
	I1002 21:07:19.362832       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.228.9"}
	I1002 21:12:46.321891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:13:26.967023       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:13:27.282272       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.12.222"}
	I1002 21:13:27.330502       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.45.134"}
	
	
	==> kube-controller-manager [4a78f66b8de9abe5c9ae735c1c02e72e3256c9e5545188d321dac91ce1606b57] <==
	I1002 21:02:49.714957       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:02:49.717140       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:02:49.719238       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:02:49.719500       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:02:49.719678       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:02:49.719541       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:02:49.720464       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:02:49.719445       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:02:49.719430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:02:49.726201       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:02:49.731606       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:02:49.731872       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:02:49.742298       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:02:49.756078       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:02:49.756275       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:02:49.760350       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:02:49.763444       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E1002 21:13:27.092561       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 21:13:27.108361       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 21:13:27.132597       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 21:13:27.137658       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 21:13:27.148704       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 21:13:27.154897       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 21:13:27.176303       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 21:13:27.176277       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [9363aff35a4acb1420657199acac0ca01f30c32a92243e6ea96ec31d175aae16] <==
	I1002 21:02:30.185628       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:02:31.435116       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 21:02:31.435148       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:31.436649       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 21:02:31.436969       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 21:02:31.437036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 21:02:31.437053       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 21:02:41.438751       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7] <==
	I1002 21:01:19.631691       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:01:19.774390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:01:19.875501       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:01:19.875540       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:01:19.876304       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:01:19.929962       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:01:19.930014       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:01:19.933989       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:01:19.934490       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:01:19.934650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:01:19.938894       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:01:19.939089       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:01:19.939124       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:01:19.939238       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:01:19.939971       1 config.go:200] "Starting service config controller"
	I1002 21:01:19.940126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:01:19.940238       1 config.go:309] "Starting node config controller"
	I1002 21:01:19.940333       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:01:20.043367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:01:20.043410       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:01:20.043424       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:01:20.048713       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [bb62981a90b2e6919f84a4d9b34bbfb6dbeaf7ea0fca18ddd27c59c4cc7382b7] <==
	I1002 21:02:28.761611       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1002 21:02:28.762697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:30.012349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:31.803349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:36.781864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1002 21:02:48.363238       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:02:48.365233       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:02:48.365530       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:02:48.400401       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:02:48.400613       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:02:48.415578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:02:48.416007       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:02:48.416157       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:48.418783       1 config.go:200] "Starting service config controller"
	I1002 21:02:48.418810       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:02:48.419572       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:02:48.419695       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:02:48.419816       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:02:48.419937       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:02:48.420888       1 config.go:309] "Starting node config controller"
	I1002 21:02:48.421046       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:02:48.421155       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:02:48.436399       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:02:48.523114       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:02:48.592161       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0dd8df4eab17a4a504ba75dcd53063299a3901716a3ee868366c80c5f68c65a9] <==
	I1002 21:02:43.746760       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:02:46.263780       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:02:46.263822       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:02:46.263834       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:02:46.264102       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:02:46.381416       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:02:46.381449       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:46.389679       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:46.390180       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:46.393786       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:02:46.394631       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:02:46.490354       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda] <==
	E1002 21:01:10.505154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 21:01:10.505442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:01:10.505584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:01:10.511789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:01:10.512019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:01:10.512128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:01:10.512226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:01:10.512321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:01:10.512551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 21:01:10.516026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:01:10.516183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:01:11.341487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:01:11.368611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:01:11.429159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:01:11.434896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:01:11.488406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:01:11.577725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:01:11.588312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 21:01:13.557394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:38.107216       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 21:02:38.107251       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 21:02:38.107270       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 21:02:38.107389       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:38.107422       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 21:02:38.107482       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 21:17:05 functional-029371 kubelet[4514]: E1002 21:17:05.670477    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxvmb" podUID="fc48a84c-994f-4117-b9fa-7e6a8c84111d"
	Oct 02 21:17:10 functional-029371 kubelet[4514]: E1002 21:17:10.670667    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:17:10 functional-029371 kubelet[4514]: E1002 21:17:10.672158    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5d2qw" podUID="25599cb2-8561-4a59-8a41-2422fd861a9d"
	Oct 02 21:17:14 functional-029371 kubelet[4514]: E1002 21:17:14.669400    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:17:17 functional-029371 kubelet[4514]: E1002 21:17:17.670938    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxvmb" podUID="fc48a84c-994f-4117-b9fa-7e6a8c84111d"
	Oct 02 21:17:24 functional-029371 kubelet[4514]: E1002 21:17:24.671259    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:17:25 functional-029371 kubelet[4514]: E1002 21:17:25.670632    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5d2qw" podUID="25599cb2-8561-4a59-8a41-2422fd861a9d"
	Oct 02 21:17:28 functional-029371 kubelet[4514]: E1002 21:17:28.669411    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:17:29 functional-029371 kubelet[4514]: E1002 21:17:29.670603    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxvmb" podUID="fc48a84c-994f-4117-b9fa-7e6a8c84111d"
	Oct 02 21:17:39 functional-029371 kubelet[4514]: E1002 21:17:39.670208    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:17:39 functional-029371 kubelet[4514]: E1002 21:17:39.671127    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5d2qw" podUID="25599cb2-8561-4a59-8a41-2422fd861a9d"
	Oct 02 21:17:41 functional-029371 kubelet[4514]: E1002 21:17:41.670835    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxvmb" podUID="fc48a84c-994f-4117-b9fa-7e6a8c84111d"
	Oct 02 21:17:43 functional-029371 kubelet[4514]: E1002 21:17:43.669504    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:17:50 functional-029371 kubelet[4514]: E1002 21:17:50.669917    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:17:50 functional-029371 kubelet[4514]: E1002 21:17:50.671159    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5d2qw" podUID="25599cb2-8561-4a59-8a41-2422fd861a9d"
	Oct 02 21:17:53 functional-029371 kubelet[4514]: E1002 21:17:53.671247    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxvmb" podUID="fc48a84c-994f-4117-b9fa-7e6a8c84111d"
	Oct 02 21:17:57 functional-029371 kubelet[4514]: E1002 21:17:57.670084    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:18:03 functional-029371 kubelet[4514]: E1002 21:18:03.670520    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5d2qw" podUID="25599cb2-8561-4a59-8a41-2422fd861a9d"
	Oct 02 21:18:04 functional-029371 kubelet[4514]: E1002 21:18:04.671111    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:18:05 functional-029371 kubelet[4514]: E1002 21:18:05.670668    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxvmb" podUID="fc48a84c-994f-4117-b9fa-7e6a8c84111d"
	Oct 02 21:18:10 functional-029371 kubelet[4514]: E1002 21:18:10.670005    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:18:16 functional-029371 kubelet[4514]: E1002 21:18:16.670231    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5d2qw" podUID="25599cb2-8561-4a59-8a41-2422fd861a9d"
	Oct 02 21:18:17 functional-029371 kubelet[4514]: E1002 21:18:17.670742    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxvmb" podUID="fc48a84c-994f-4117-b9fa-7e6a8c84111d"
	Oct 02 21:18:19 functional-029371 kubelet[4514]: E1002 21:18:19.669826    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:18:22 functional-029371 kubelet[4514]: E1002 21:18:22.671837    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	
	
	==> storage-provisioner [9f4fa4e6cafcdf15d3a652b129916916db3a35a6bba6315257415306d82081ac] <==
	I1002 21:02:28.534730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:02:28.536501       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e9301c91add10f7b8320a98341322365ab0397a2b58eb545f437ffcdcab5d2df] <==
	W1002 21:18:02.743267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:04.746753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:04.751643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:06.754413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:06.759022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:08.762364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:08.769204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:10.772447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:10.776716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:12.779833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:12.784591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:14.788288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:14.795203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:16.798389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:16.805135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:18.808762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:18.813616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:20.816953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:20.821462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:22.824559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:22.831181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:24.834858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:24.839507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:26.844143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:18:26.849030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-029371 -n functional-029371
helpers_test.go:269: (dbg) Run:  kubectl --context functional-029371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-7d85dfc575-hf52j sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxvmb kubernetes-dashboard-855c9754f9-5d2qw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-029371 describe pod busybox-mount hello-node-connect-7d85dfc575-hf52j sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxvmb kubernetes-dashboard-855c9754f9-5d2qw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-029371 describe pod busybox-mount hello-node-connect-7d85dfc575-hf52j sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxvmb kubernetes-dashboard-855c9754f9-5d2qw: exit status 1 (116.556676ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:13:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 21:13:17 +0000
	      Finished:     Thu, 02 Oct 2025 21:13:17 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kczvz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kczvz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-029371
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.262s (2.262s including waiting). Image size: 1935750 bytes.
	  Normal  Created    5m11s  kubelet            Created container: mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-hf52j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:03:19 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwrnm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwrnm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hf52j to functional-029371
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     5m (x43 over 15m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    9s (x64 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:03:16 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9whq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-f9whq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/sp-pod to functional-029371
	  Warning  Failed     13m (x4 over 15m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     12m                kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    6s (x63 over 15m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     6s (x63 over 15m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vxvmb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5d2qw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-029371 describe pod busybox-mount hello-node-connect-7d85dfc575-hf52j sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxvmb kubernetes-dashboard-855c9754f9-5d2qw: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (604.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-029371 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-029371 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hf52j" [3f468c29-a57d-4a49-b576-7dfbb2cf1868] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 21:03:47.966625 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:06:04.104843 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:06:31.807942 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-029371 -n functional-029371
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 21:13:19.478410846 +0000 UTC m=+1264.243487205
functional_test.go:1645: (dbg) Run:  kubectl --context functional-029371 describe po hello-node-connect-7d85dfc575-hf52j -n default
functional_test.go:1645: (dbg) kubectl --context functional-029371 describe po hello-node-connect-7d85dfc575-hf52j -n default:
Name:             hello-node-connect-7d85dfc575-hf52j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-029371/192.168.49.2
Start Time:       Thu, 02 Oct 2025 21:03:19 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwrnm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xwrnm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hf52j to functional-029371
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
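Every pull attempt above fails with HTTP 429 from registry-1.docker.io, i.e. Docker Hub's unauthenticated pull limit, so the pod can never leave ImagePullBackOff. One mitigation, sketched here as an assumption rather than something the recorded run attempted, is to side-load the image into the profile's containerd store and stop the kubelet from going back to the registry:

	docker pull kicbase/echo-server:latest
	out/minikube-linux-arm64 -p functional-029371 image load kicbase/echo-server:latest
	kubectl --context functional-029371 patch deployment hello-node-connect \
	  -p '{"spec":{"template":{"spec":{"containers":[{"name":"echo-server","imagePullPolicy":"IfNotPresent"}]}}}}'

The patch matters because an untagged image reference defaults the pull policy to Always; with IfNotPresent the rolled-out pod can use the side-loaded copy without contacting the registry. (The initial docker pull assumes a host or mirror that is not itself rate-limited.)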
functional_test.go:1645: (dbg) Run:  kubectl --context functional-029371 logs hello-node-connect-7d85dfc575-hf52j -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-029371 logs hello-node-connect-7d85dfc575-hf52j -n default: exit status 1 (101.185934ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hf52j" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-029371 logs hello-node-connect-7d85dfc575-hf52j -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
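kubectl logs returns BadRequest here because the echo-server container was never created, so there is no container log to read; for a pod stuck in ImagePullBackOff the event stream is the useful signal. An equivalent diagnostic, assumed rather than taken from the recorded run:

	kubectl --context functional-029371 get events -n default \
	  --field-selector involvedObject.name=hello-node-connect-7d85dfc575-hf52j \
	  --sort-by=.lastTimestamp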
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-029371 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-hf52j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-029371/192.168.49.2
Start Time:       Thu, 02 Oct 2025 21:03:19 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwrnm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xwrnm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hf52j to functional-029371
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-029371 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-029371 logs -l app=hello-node-connect: exit status 1 (96.741382ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hf52j" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-029371 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-029371 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.24.217
IPs:                      10.101.24.217
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30633/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
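Note that the Endpoints field above is empty: endpoints are only populated from pods that pass readiness, and the lone hello-node-connect pod never became Ready, so NodePort 30633 has nothing to forward to. A quick confirmation of that chain (assumed, not part of the recorded run):

	kubectl --context functional-029371 get endpoints hello-node-connect
	kubectl --context functional-029371 get pods -l app=hello-node-connect -o wide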
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
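With all three proxy variables empty on the host, a misconfigured HTTP proxy can be ruled out as a cause of the failed pulls. Had they been set, they would also need to be propagated into the node container, for example via minikube's --docker-env flag (illustrative only, values assumed):

	out/minikube-linux-arm64 start -p functional-029371 \
	  --docker-env HTTP_PROXY=$HTTP_PROXY \
	  --docker-env HTTPS_PROXY=$HTTPS_PROXY \
	  --docker-env NO_PROXY=192.168.49.0/24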
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-029371
helpers_test.go:243: (dbg) docker inspect functional-029371:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3",
	        "Created": "2025-10-02T21:00:51.978972474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2811196,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:00:52.062744723Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/hosts",
	        "LogPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3-json.log",
	        "Name": "/functional-029371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-029371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-029371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3",
	                "LowerDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30-init/diff:/var/lib/docker/overlay2/51331203fb22f22857c79ac4aca1f3d12d523fa3ef805f7f258c2d1849e728ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/merged",
	                "UpperDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/diff",
	                "WorkDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-029371",
	                "Source": "/var/lib/docker/volumes/functional-029371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-029371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-029371",
	                "name.minikube.sigs.k8s.io": "functional-029371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1fd369d7c675f494df1af8bbeb228ab303420ec6e440618440a08cd22840ddd9",
	            "SandboxKey": "/var/run/docker/netns/1fd369d7c675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36127"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36128"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36131"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36129"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36130"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-029371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:8a:b4:10:41:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "00e3563aa4808dcd5f3a224a2151deb754278db778c1a4a02e08e667b6d2949c",
	                    "EndpointID": "5ce2c0a1f336f8f0a42c5f4a14f366cc54ee230716ae07896a98b853c1146cb5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-029371",
	                        "090c5f703e06"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
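Most of the inspect dump above is boilerplate for the kic node container; the fields relevant to this post-mortem are the published ports (API server 8441/tcp on 127.0.0.1:36130, SSH 22/tcp on 36127) and the 192.168.49.2 address on the functional-029371 network. Those can be pulled out directly instead of dumping the whole document, for example (illustrative, not part of the recorded run):

	docker inspect functional-029371 --format '{{json .NetworkSettings.Ports}}'
	docker inspect functional-029371 --format '{{(index .NetworkSettings.Networks "functional-029371").IPAddress}}'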
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-029371 -n functional-029371
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 logs -n 25: (1.831778315s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-029371 ssh -n functional-029371 sudo cat /home/docker/cp-test.txt                                              │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ config  │ functional-029371 config get cpus                                                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ ssh     │ functional-029371 ssh echo hello                                                                                          │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ cp      │ functional-029371 cp functional-029371:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd315440799/001/cp-test.txt │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ ssh     │ functional-029371 ssh cat /etc/hostname                                                                                   │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ ssh     │ functional-029371 ssh -n functional-029371 sudo cat /home/docker/cp-test.txt                                              │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ tunnel  │ functional-029371 tunnel --alsologtostderr                                                                                │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ tunnel  │ functional-029371 tunnel --alsologtostderr                                                                                │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ cp      │ functional-029371 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ tunnel  │ functional-029371 tunnel --alsologtostderr                                                                                │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ ssh     │ functional-029371 ssh -n functional-029371 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ addons  │ functional-029371 addons list                                                                                             │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ addons  │ functional-029371 addons list -o json                                                                                     │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ service │ functional-029371 service list                                                                                            │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ service │ functional-029371 service list -o json                                                                                    │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ service │ functional-029371 service --namespace=default --https --url hello-node                                                    │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ service │ functional-029371 service hello-node --url --format={{.IP}}                                                               │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ service │ functional-029371 service hello-node --url                                                                                │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ mount   │ -p functional-029371 /tmp/TestFunctionalparallelMountCmdany-port1095637664/001:/mount-9p --alsologtostderr -v=1           │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │                     │
	│ ssh     │ functional-029371 ssh findmnt -T /mount-9p | grep 9p                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │                     │
	│ ssh     │ functional-029371 ssh findmnt -T /mount-9p | grep 9p                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh     │ functional-029371 ssh -- ls -la /mount-9p                                                                                 │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh     │ functional-029371 ssh cat /mount-9p/test-1759439592859047824                                                              │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh     │ functional-029371 ssh stat /mount-9p/created-by-test                                                                      │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │ 02 Oct 25 21:13 UTC │
	│ ssh     │ functional-029371 ssh stat /mount-9p/created-by-pod                                                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:02:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:02:18.327574 2815477 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:02:18.327693 2815477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:02:18.327697 2815477 out.go:374] Setting ErrFile to fd 2...
	I1002 21:02:18.327701 2815477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:02:18.327945 2815477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:02:18.328299 2815477 out.go:368] Setting JSON to false
	I1002 21:02:18.329233 2815477 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":60288,"bootTime":1759378651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 21:02:18.329314 2815477 start.go:140] virtualization:  
	I1002 21:02:18.332900 2815477 out.go:179] * [functional-029371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:02:18.335853 2815477 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:02:18.335921 2815477 notify.go:220] Checking for updates...
	I1002 21:02:18.341622 2815477 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:02:18.344437 2815477 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:02:18.347286 2815477 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 21:02:18.350025 2815477 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:02:18.352944 2815477 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:02:18.356234 2815477 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:02:18.356368 2815477 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:02:18.387224 2815477 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:02:18.387381 2815477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:02:18.454864 2815477 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 21:02:18.444590801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:02:18.454973 2815477 docker.go:318] overlay module found
	I1002 21:02:18.458051 2815477 out.go:179] * Using the docker driver based on existing profile
	I1002 21:02:18.460868 2815477 start.go:304] selected driver: docker
	I1002 21:02:18.460877 2815477 start.go:924] validating driver "docker" against &{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:02:18.460998 2815477 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:02:18.461129 2815477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:02:18.520415 2815477 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 21:02:18.51090881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:02:18.520834 2815477 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:02:18.520854 2815477 cni.go:84] Creating CNI manager for ""
	I1002 21:02:18.520911 2815477 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 21:02:18.520952 2815477 start.go:348] cluster config:
	{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:02:18.525815 2815477 out.go:179] * Starting "functional-029371" primary control-plane node in "functional-029371" cluster
	I1002 21:02:18.528712 2815477 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 21:02:18.531602 2815477 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:02:18.534475 2815477 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 21:02:18.534526 2815477 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 21:02:18.534540 2815477 cache.go:58] Caching tarball of preloaded images
	I1002 21:02:18.534570 2815477 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:02:18.534639 2815477 preload.go:233] Found /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 21:02:18.534647 2815477 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 21:02:18.534762 2815477 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/config.json ...
	I1002 21:02:18.554621 2815477 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:02:18.554633 2815477 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:02:18.554661 2815477 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:02:18.554684 2815477 start.go:360] acquireMachinesLock for functional-029371: {Name:mk4a1a504d880be64e2f8361d5fd38b59990af37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:02:18.554753 2815477 start.go:364] duration metric: took 48.197µs to acquireMachinesLock for "functional-029371"
	I1002 21:02:18.554775 2815477 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:02:18.554786 2815477 fix.go:54] fixHost starting: 
	I1002 21:02:18.555045 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:18.572013 2815477 fix.go:112] recreateIfNeeded on functional-029371: state=Running err=<nil>
	W1002 21:02:18.572033 2815477 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:02:18.575372 2815477 out.go:252] * Updating the running docker "functional-029371" container ...
	I1002 21:02:18.575412 2815477 machine.go:93] provisionDockerMachine start ...
	I1002 21:02:18.575507 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:18.592392 2815477 main.go:141] libmachine: Using SSH client type: native
	I1002 21:02:18.592713 2815477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36127 <nil> <nil>}
	I1002 21:02:18.592721 2815477 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:02:18.726945 2815477 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-029371
	
	I1002 21:02:18.726959 2815477 ubuntu.go:182] provisioning hostname "functional-029371"
	I1002 21:02:18.727021 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:18.745596 2815477 main.go:141] libmachine: Using SSH client type: native
	I1002 21:02:18.745894 2815477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36127 <nil> <nil>}
	I1002 21:02:18.745903 2815477 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-029371 && echo "functional-029371" | sudo tee /etc/hostname
	I1002 21:02:18.893021 2815477 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-029371
	
	I1002 21:02:18.893086 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:18.913820 2815477 main.go:141] libmachine: Using SSH client type: native
	I1002 21:02:18.914150 2815477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36127 <nil> <nil>}
	I1002 21:02:18.914168 2815477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-029371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-029371/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-029371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:02:19.055644 2815477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:02:19.055668 2815477 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-2783765/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-2783765/.minikube}
	I1002 21:02:19.055690 2815477 ubuntu.go:190] setting up certificates
	I1002 21:02:19.055707 2815477 provision.go:84] configureAuth start
	I1002 21:02:19.055790 2815477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-029371
	I1002 21:02:19.073703 2815477 provision.go:143] copyHostCerts
	I1002 21:02:19.073759 2815477 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem, removing ...
	I1002 21:02:19.073776 2815477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem
	I1002 21:02:19.073845 2815477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem (1078 bytes)
	I1002 21:02:19.073938 2815477 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem, removing ...
	I1002 21:02:19.073942 2815477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem
	I1002 21:02:19.073961 2815477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem (1123 bytes)
	I1002 21:02:19.074009 2815477 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem, removing ...
	I1002 21:02:19.074012 2815477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem
	I1002 21:02:19.074029 2815477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem (1675 bytes)
	I1002 21:02:19.074079 2815477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem org=jenkins.functional-029371 san=[127.0.0.1 192.168.49.2 functional-029371 localhost minikube]
	I1002 21:02:19.360043 2815477 provision.go:177] copyRemoteCerts
	I1002 21:02:19.360097 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:02:19.360140 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.377771 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.475182 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:02:19.493095 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:02:19.511348 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:02:19.529739 2815477 provision.go:87] duration metric: took 474.02038ms to configureAuth
	I1002 21:02:19.529756 2815477 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:02:19.529968 2815477 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:02:19.529974 2815477 machine.go:96] duration metric: took 954.557056ms to provisionDockerMachine
	I1002 21:02:19.529981 2815477 start.go:293] postStartSetup for "functional-029371" (driver="docker")
	I1002 21:02:19.529989 2815477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:02:19.530036 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:02:19.530074 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.550170 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.647315 2815477 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:02:19.650603 2815477 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:02:19.650622 2815477 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:02:19.650630 2815477 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/addons for local assets ...
	I1002 21:02:19.650680 2815477 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/files for local assets ...
	I1002 21:02:19.650755 2815477 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem -> 27856302.pem in /etc/ssl/certs
	I1002 21:02:19.650830 2815477 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/test/nested/copy/2785630/hosts -> hosts in /etc/test/nested/copy/2785630
	I1002 21:02:19.650876 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2785630
	I1002 21:02:19.658354 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem --> /etc/ssl/certs/27856302.pem (1708 bytes)
	I1002 21:02:19.677965 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/test/nested/copy/2785630/hosts --> /etc/test/nested/copy/2785630/hosts (40 bytes)
	I1002 21:02:19.695964 2815477 start.go:296] duration metric: took 165.969249ms for postStartSetup
	I1002 21:02:19.696050 2815477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:02:19.696087 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.712594 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.805107 2815477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:02:19.810194 2815477 fix.go:56] duration metric: took 1.255406029s for fixHost
	I1002 21:02:19.810209 2815477 start.go:83] releasing machines lock for "functional-029371", held for 1.255449099s
	I1002 21:02:19.810284 2815477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-029371
	I1002 21:02:19.830377 2815477 ssh_runner.go:195] Run: cat /version.json
	I1002 21:02:19.830419 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.830725 2815477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:02:19.830772 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.854224 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.856708 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:20.046165 2815477 ssh_runner.go:195] Run: systemctl --version
	I1002 21:02:20.054787 2815477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:02:20.060565 2815477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:02:20.060633 2815477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:02:20.069479 2815477 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:02:20.069494 2815477 start.go:495] detecting cgroup driver to use...
	I1002 21:02:20.069525 2815477 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:02:20.069572 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 21:02:20.086314 2815477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 21:02:20.102278 2815477 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:02:20.102334 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:02:20.120181 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:02:20.135751 2815477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:02:20.282482 2815477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:02:20.435863 2815477 docker.go:234] disabling docker service ...
	I1002 21:02:20.435917 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:02:20.453580 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:02:20.467114 2815477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:02:20.605677 2815477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:02:20.748817 2815477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:02:20.762948 2815477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:02:20.779110 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 21:02:20.789474 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 21:02:20.799194 2815477 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 21:02:20.799250 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 21:02:20.809723 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 21:02:20.819156 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 21:02:20.828438 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 21:02:20.837630 2815477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:02:20.845533 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 21:02:20.854493 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 21:02:20.863163 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 21:02:20.872496 2815477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:02:20.879706 2815477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:02:20.887365 2815477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:02:21.029034 2815477 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 21:02:21.349477 2815477 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 21:02:21.349535 2815477 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 21:02:21.353746 2815477 start.go:563] Will wait 60s for crictl version
	I1002 21:02:21.353799 2815477 ssh_runner.go:195] Run: which crictl
	I1002 21:02:21.357762 2815477 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:02:21.384810 2815477 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 21:02:21.384873 2815477 ssh_runner.go:195] Run: containerd --version
	I1002 21:02:21.409344 2815477 ssh_runner.go:195] Run: containerd --version
	I1002 21:02:21.438032 2815477 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 21:02:21.441096 2815477 cli_runner.go:164] Run: docker network inspect functional-029371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:02:21.457329 2815477 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:02:21.464511 2815477 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 21:02:21.467499 2815477 kubeadm.go:883] updating cluster {Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:02:21.467622 2815477 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 21:02:21.467726 2815477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:02:21.494454 2815477 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 21:02:21.494466 2815477 containerd.go:534] Images already preloaded, skipping extraction
	I1002 21:02:21.494532 2815477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:02:21.522744 2815477 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 21:02:21.522756 2815477 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:02:21.522762 2815477 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 containerd true true} ...
	I1002 21:02:21.522857 2815477 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-029371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:02:21.522923 2815477 ssh_runner.go:195] Run: sudo crictl info
	I1002 21:02:21.550642 2815477 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 21:02:21.550659 2815477 cni.go:84] Creating CNI manager for ""
	I1002 21:02:21.550667 2815477 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 21:02:21.550676 2815477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:02:21.550699 2815477 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-029371 NodeName:functional-029371 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:02:21.550809 2815477 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-029371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:02:21.550872 2815477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:02:21.559173 2815477 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:02:21.559240 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:02:21.567410 2815477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1002 21:02:21.581080 2815477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:02:21.594319 2815477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2080 bytes)
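Annotation: the rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new (2080 bytes). It is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of sanity-checking such a file by decoding each document, using the third-party gopkg.in/yaml.v3 package; the local filename is an assumption and this is not how minikube validates the config:

	package main
	
	import (
		"bytes"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		raw, err := os.ReadFile("kubeadm.yaml.new") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(raw))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Print the kind/apiVersion of each document in the stream.
			fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		}
	}
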
	I1002 21:02:21.607749 2815477 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:02:21.611840 2815477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:02:21.750717 2815477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:02:21.764532 2815477 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371 for IP: 192.168.49.2
	I1002 21:02:21.764544 2815477 certs.go:195] generating shared ca certs ...
	I1002 21:02:21.764559 2815477 certs.go:227] acquiring lock for ca certs: {Name:mk9dd0ab4a99d312fca91f03b1dec8574d28a55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:02:21.764715 2815477 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key
	I1002 21:02:21.764757 2815477 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key
	I1002 21:02:21.764763 2815477 certs.go:257] generating profile certs ...
	I1002 21:02:21.764842 2815477 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.key
	I1002 21:02:21.764886 2815477 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/apiserver.key.13d3535d
	I1002 21:02:21.764924 2815477 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/proxy-client.key
	I1002 21:02:21.765029 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/2785630.pem (1338 bytes)
	W1002 21:02:21.765053 2815477 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/2785630_empty.pem, impossibly tiny 0 bytes
	I1002 21:02:21.765060 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:02:21.765081 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:02:21.765100 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:02:21.765124 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem (1675 bytes)
	I1002 21:02:21.765167 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem (1708 bytes)
	I1002 21:02:21.765738 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:02:21.787950 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:02:21.811445 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:02:21.831379 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:02:21.849949 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:02:21.868825 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:02:21.894520 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:02:21.912602 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:02:21.930598 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/2785630.pem --> /usr/share/ca-certificates/2785630.pem (1338 bytes)
	I1002 21:02:21.949599 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem --> /usr/share/ca-certificates/27856302.pem (1708 bytes)
	I1002 21:02:21.968165 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:02:21.985867 2815477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:02:21.999642 2815477 ssh_runner.go:195] Run: openssl version
	I1002 21:02:22.008041 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2785630.pem && ln -fs /usr/share/ca-certificates/2785630.pem /etc/ssl/certs/2785630.pem"
	I1002 21:02:22.018009 2815477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2785630.pem
	I1002 21:02:22.023044 2815477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:00 /usr/share/ca-certificates/2785630.pem
	I1002 21:02:22.023105 2815477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2785630.pem
	I1002 21:02:22.080761 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2785630.pem /etc/ssl/certs/51391683.0"
	I1002 21:02:22.089921 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27856302.pem && ln -fs /usr/share/ca-certificates/27856302.pem /etc/ssl/certs/27856302.pem"
	I1002 21:02:22.101316 2815477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27856302.pem
	I1002 21:02:22.105417 2815477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:00 /usr/share/ca-certificates/27856302.pem
	I1002 21:02:22.105474 2815477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27856302.pem
	I1002 21:02:22.146642 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27856302.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:02:22.155623 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:02:22.164052 2815477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:02:22.167667 2815477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:53 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:02:22.167721 2815477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:02:22.208766 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:02:22.216829 2815477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:02:22.220598 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:02:22.261495 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:02:22.304687 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:02:22.346589 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:02:22.389278 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:02:22.432804 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
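Annotation: each `openssl x509 -checkend 86400` call above asks whether the given certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509 (the file path is illustrative; the log checks several certificates under /var/lib/minikube/certs, and this is not the code minikube runs):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}
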
	I1002 21:02:22.473902 2815477 kubeadm.go:400] StartCluster: {Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:02:22.473988 2815477 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 21:02:22.474053 2815477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:02:22.513156 2815477 cri.go:89] found id: "6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435"
	I1002 21:02:22.513167 2815477 cri.go:89] found id: "5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c"
	I1002 21:02:22.513170 2815477 cri.go:89] found id: "fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342"
	I1002 21:02:22.513177 2815477 cri.go:89] found id: "71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7"
	I1002 21:02:22.513179 2815477 cri.go:89] found id: "d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a"
	I1002 21:02:22.513182 2815477 cri.go:89] found id: "928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539"
	I1002 21:02:22.513184 2815477 cri.go:89] found id: "97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890"
	I1002 21:02:22.513187 2815477 cri.go:89] found id: "37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda"
	I1002 21:02:22.513189 2815477 cri.go:89] found id: ""
	I1002 21:02:22.513244 2815477 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1002 21:02:22.541939 2815477 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","pid":1728,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9/rootfs","created":"2025-10-02T21:01:19.281829803Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xd2gs_9f8999eb-7efb-417d-9a06-398ee7234f0b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xd2gs","io.kubernetes.cri.sandbox-
namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9f8999eb-7efb-417d-9a06-398ee7234f0b"},"owner":"root"},{"ociVersion":"1.2.1","id":"28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1/rootfs","created":"2025-10-02T21:02:00.69289394Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-bswh9_effda912-e3ee-4d9f-af34-8abe9a9d3659","io.kubernetes.cri.sandbox-memory":"178257920","io.kube
rnetes.cri.sandbox-name":"coredns-66bc5c9577-bswh9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"effda912-e3ee-4d9f-af34-8abe9a9d3659"},"owner":"root"},{"ociVersion":"1.2.1","id":"37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda","pid":1314,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda/rootfs","created":"2025-10-02T21:01:06.242121573Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-syst
em","io.kubernetes.cri.sandbox-uid":"d6601939fa1d9587e15055ca9ac3c312"},"owner":"root"},{"ociVersion":"1.2.1","id":"5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c","pid":2125,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c/rootfs","created":"2025-10-02T21:02:00.753021077Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f02122ca-7ec7-49b6-a4fc-f334ffb1ff51"},"owner":"root"},{"ociVersion":"1
.2.1","id":"6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435","pid":2167,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435/rootfs","created":"2025-10-02T21:02:00.834467894Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-bswh9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"effda912-e3ee-4d9f-af34-8abe9a9d3659"},"owner":"root"},{"ociVersion":"1.2.1","id":"71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7","pid":1783,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7/rootfs","created":"2025-10-02T21:01:19.507507185Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","io.kubernetes.cri.sandbox-name":"kube-proxy-xd2gs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9f8999eb-7efb-417d-9a06-398ee7234f0b"},"owner":"root"},{"ociVersion":"1.2.1","id":"7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","pid":1164,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","
rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052/rootfs","created":"2025-10-02T21:01:06.054376326Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-029371_d6601939fa1d9587e15055ca9ac3c312","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d6601939fa1d9587e15055ca9ac3c312"},"owner":"root"},{"ociVersion":"1.2.1","id":"928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539","pid":1404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2
.task/k8s.io/928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539/rootfs","created":"2025-10-02T21:01:06.367174341Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"eb13f944745743fe45a252f830c55d2d"},"owner":"root"},{"ociVersion":"1.2.1","id":"97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890","pid":1341,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890","rootfs":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890/rootfs","created":"2025-10-02T21:01:06.278743308Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","io.kubernetes.cri.sandbox-name":"etcd-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f863e7803f44c7150b86910fae3132d1"},"owner":"root"},{"ociVersion":"1.2.1","id":"b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","pid":1248,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5/rootfs","created":"2025-10-02T2
1:01:06.150511745Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-029371_eb13f944745743fe45a252f830c55d2d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"eb13f944745743fe45a252f830c55d2d"},"owner":"root"},{"ociVersion":"1.2.1","id":"d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","pid":2029,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d016164eeb92f9a7
04af715d3c123e7de84043633d9dc823690f2f6925faed45/rootfs","created":"2025-10-02T21:02:00.633606611Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f02122ca-7ec7-49b6-a4fc-f334ffb1ff51","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f02122ca-7ec7-49b6-a4fc-f334ffb1ff51"},"owner":"root"},{"ociVersion":"1.2.1","id":"d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a","pid":1415,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a","rootfs":"/run/cont
ainerd/io.containerd.runtime.v2.task/k8s.io/d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a/rootfs","created":"2025-10-02T21:01:06.394823582Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bd369b26fb29618a00350f05f199620"},"owner":"root"},{"ociVersion":"1.2.1","id":"ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","pid":1700,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe0641167404545fd9dd5edf0b199e21
f9a078f621b762a8223e54ea012cdde/rootfs","created":"2025-10-02T21:01:19.256501217Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9zmhd_2d6be820-35d6-4183-800b-2b4a0971e0bc","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-9zmhd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2d6be820-35d6-4183-800b-2b4a0971e0bc"},"owner":"root"},{"ociVersion":"1.2.1","id":"f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","pid":1206,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","rootfs":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c/rootfs","created":"2025-10-02T21:01:06.097545635Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-029371_f863e7803f44c7150b86910fae3132d1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f863e7803f44c7150b86910fae3132d1"},"owner":"root"},{"ociVersion":"1.2.1","id":"f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f","pid":1273,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8f65514862b2f4f45b9907e92c9331a8fcc3d3b8
4cc4be98d04604b846c0a3f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f/rootfs","created":"2025-10-02T21:01:06.171730182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-029371_0bd369b26fb29618a00350f05f199620","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bd369b26fb29618a00350f05f199620"},"owner":"root"},{"ociVersion":"1.2.1","id":"fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342","pid":1781,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342/rootfs","created":"2025-10-02T21:01:19.500303822Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","io.kubernetes.cri.sandbox-name":"kindnet-9zmhd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2d6be820-35d6-4183-800b-2b4a0971e0bc"},"owner":"root"}]
	I1002 21:02:22.542232 2815477 cri.go:126] list returned 16 containers
	I1002 21:02:22.542240 2815477 cri.go:129] container: {ID:095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9 Status:running}
	I1002 21:02:22.542260 2815477 cri.go:131] skipping 095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9 - not in ps
	I1002 21:02:22.542264 2815477 cri.go:129] container: {ID:28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1 Status:running}
	I1002 21:02:22.542269 2815477 cri.go:131] skipping 28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1 - not in ps
	I1002 21:02:22.542271 2815477 cri.go:129] container: {ID:37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda Status:running}
	I1002 21:02:22.542277 2815477 cri.go:135] skipping {37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda running}: state = "running", want "paused"
	I1002 21:02:22.542284 2815477 cri.go:129] container: {ID:5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c Status:running}
	I1002 21:02:22.542289 2815477 cri.go:135] skipping {5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c running}: state = "running", want "paused"
	I1002 21:02:22.542295 2815477 cri.go:129] container: {ID:6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 Status:running}
	I1002 21:02:22.542300 2815477 cri.go:135] skipping {6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 running}: state = "running", want "paused"
	I1002 21:02:22.542304 2815477 cri.go:129] container: {ID:71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 Status:running}
	I1002 21:02:22.542308 2815477 cri.go:135] skipping {71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 running}: state = "running", want "paused"
	I1002 21:02:22.542312 2815477 cri.go:129] container: {ID:7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052 Status:running}
	I1002 21:02:22.542317 2815477 cri.go:131] skipping 7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052 - not in ps
	I1002 21:02:22.542320 2815477 cri.go:129] container: {ID:928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 Status:running}
	I1002 21:02:22.542325 2815477 cri.go:135] skipping {928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 running}: state = "running", want "paused"
	I1002 21:02:22.542329 2815477 cri.go:129] container: {ID:97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 Status:running}
	I1002 21:02:22.542336 2815477 cri.go:135] skipping {97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 running}: state = "running", want "paused"
	I1002 21:02:22.542340 2815477 cri.go:129] container: {ID:b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5 Status:running}
	I1002 21:02:22.542344 2815477 cri.go:131] skipping b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5 - not in ps
	I1002 21:02:22.542348 2815477 cri.go:129] container: {ID:d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45 Status:running}
	I1002 21:02:22.542352 2815477 cri.go:131] skipping d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45 - not in ps
	I1002 21:02:22.542356 2815477 cri.go:129] container: {ID:d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a Status:running}
	I1002 21:02:22.542361 2815477 cri.go:135] skipping {d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a running}: state = "running", want "paused"
	I1002 21:02:22.542366 2815477 cri.go:129] container: {ID:ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde Status:running}
	I1002 21:02:22.542370 2815477 cri.go:131] skipping ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde - not in ps
	I1002 21:02:22.542372 2815477 cri.go:129] container: {ID:f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c Status:running}
	I1002 21:02:22.542377 2815477 cri.go:131] skipping f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c - not in ps
	I1002 21:02:22.542380 2815477 cri.go:129] container: {ID:f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f Status:running}
	I1002 21:02:22.542384 2815477 cri.go:131] skipping f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f - not in ps
	I1002 21:02:22.542386 2815477 cri.go:129] container: {ID:fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 Status:running}
	I1002 21:02:22.542393 2815477 cri.go:135] skipping {fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 running}: state = "running", want "paused"
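Annotation: the filter above keeps only containers that both appear in the crictl listing and are already paused; since every kube-system container is still running, nothing is selected here. A compact sketch of that selection over the `runc list -f json` output shown earlier (field names follow that JSON; this is illustrative, not minikube's cri package):

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// Only the fields the filter needs from `runc list -f json`.
	type task struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	
	func selectPaused(runcJSON []byte, inPs map[string]bool) ([]string, error) {
		var tasks []task
		if err := json.Unmarshal(runcJSON, &tasks); err != nil {
			return nil, err
		}
		var ids []string
		for _, t := range tasks {
			if !inPs[t.ID] {
				continue // "skipping ... - not in ps"
			}
			if t.Status != "paused" {
				continue // `state = "running", want "paused"`
			}
			ids = append(ids, t.ID)
		}
		return ids, nil
	}
	
	func main() {
		sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
		ids, err := selectPaused(sample, map[string]bool{"abc": true, "def": true})
		if err != nil {
			panic(err)
		}
		fmt.Println(ids) // [def]
	}
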
	I1002 21:02:22.542448 2815477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:02:22.550817 2815477 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:02:22.550836 2815477 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:02:22.550886 2815477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:02:22.558336 2815477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:22.558844 2815477 kubeconfig.go:125] found "functional-029371" server: "https://192.168.49.2:8441"
	I1002 21:02:22.560196 2815477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:02:22.569716 2815477 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 21:01:00.929404548 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 21:02:21.602771014 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
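Annotation: the unified diff shows the only drift is the enable-admission-plugins value (the test's NamespaceAutoProvision override), which is why minikube reconfigures the existing cluster instead of rebuilding it. A minimal sketch of detecting such drift by comparing the current and freshly rendered configs (paths copied from the log; minikube itself shells out to `diff -u` as shown above):

	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	func main() {
		current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		rendered, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(current, rendered) {
			fmt.Println("no kubeadm config drift")
			return
		}
		fmt.Println("kubeadm config drift detected; cluster will be reconfigured")
	}
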
	I1002 21:02:22.569725 2815477 kubeadm.go:1160] stopping kube-system containers ...
	I1002 21:02:22.569736 2815477 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1002 21:02:22.569793 2815477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:02:22.598953 2815477 cri.go:89] found id: "6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435"
	I1002 21:02:22.598965 2815477 cri.go:89] found id: "5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c"
	I1002 21:02:22.598968 2815477 cri.go:89] found id: "fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342"
	I1002 21:02:22.598971 2815477 cri.go:89] found id: "71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7"
	I1002 21:02:22.598979 2815477 cri.go:89] found id: "d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a"
	I1002 21:02:22.598982 2815477 cri.go:89] found id: "928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539"
	I1002 21:02:22.598985 2815477 cri.go:89] found id: "97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890"
	I1002 21:02:22.598987 2815477 cri.go:89] found id: "37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda"
	I1002 21:02:22.598990 2815477 cri.go:89] found id: ""
	I1002 21:02:22.598994 2815477 cri.go:252] Stopping containers: [6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a 928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda]
	I1002 21:02:22.599061 2815477 ssh_runner.go:195] Run: which crictl
	I1002 21:02:22.603148 2815477 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a 928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda
	I1002 21:02:38.161400 2815477 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a 928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda: (15.55821054s)
	I1002 21:02:38.161464 2815477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 21:02:38.261721 2815477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:02:38.269735 2815477 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 21:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 21:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 21:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 21:01 /etc/kubernetes/scheduler.conf
	
	I1002 21:02:38.269802 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 21:02:38.278019 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 21:02:38.285918 2815477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:38.285976 2815477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:02:38.293979 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 21:02:38.301949 2815477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:38.302011 2815477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:02:38.309539 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 21:02:38.317865 2815477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:38.317919 2815477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
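Annotation: each grep above checks that a kubeconfig still references https://control-plane.minikube.internal:8441; files that do not are removed so the kubeadm kubeconfig phase below can regenerate them. A small Go sketch of the same check (endpoint and paths copied from the log; illustrative only, not minikube's implementation):

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8441"
		for _, path := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(path)
			if err != nil {
				panic(err)
			}
			if !strings.Contains(string(data), endpoint) {
				// Mirrors the log: stale kubeconfigs are removed so kubeadm can regenerate them.
				fmt.Printf("%s does not reference %s; removing\n", path, endpoint)
				_ = os.Remove(path)
			}
		}
	}
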
	I1002 21:02:38.325506 2815477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:02:38.333806 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:38.382285 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.300082 2815477 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.917773068s)
	I1002 21:02:40.300140 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.535875 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.596445 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.674251 2815477 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:02:40.674318 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:41.175273 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:41.675211 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:41.698203 2815477 api_server.go:72] duration metric: took 1.023953292s to wait for apiserver process to appear ...
	I1002 21:02:41.698218 2815477 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:02:41.698248 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:46.227320 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:02:46.227336 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:02:46.227353 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:46.268665 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:02:46.268680 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:02:46.699210 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:46.720610 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:02:46.720630 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:02:47.198789 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:47.207804 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:02:47.207833 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:02:47.698385 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:47.706580 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 21:02:47.720262 2815477 api_server.go:141] control plane version: v1.34.1
	I1002 21:02:47.720282 2815477 api_server.go:131] duration metric: took 6.022059105s to wait for apiserver health ...
	I1002 21:02:47.720290 2815477 cni.go:84] Creating CNI manager for ""
	I1002 21:02:47.720295 2815477 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 21:02:47.723550 2815477 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:02:47.726413 2815477 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:02:47.730489 2815477 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:02:47.730499 2815477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:02:47.743852 2815477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:02:48.181360 2815477 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:02:48.185214 2815477 system_pods.go:59] 8 kube-system pods found
	I1002 21:02:48.185235 2815477 system_pods.go:61] "coredns-66bc5c9577-bswh9" [effda912-e3ee-4d9f-af34-8abe9a9d3659] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:02:48.185244 2815477 system_pods.go:61] "etcd-functional-029371" [0bf73d2f-a733-44ce-b06a-2fbb6abee9d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:02:48.185249 2815477 system_pods.go:61] "kindnet-9zmhd" [2d6be820-35d6-4183-800b-2b4a0971e0bc] Running
	I1002 21:02:48.185254 2815477 system_pods.go:61] "kube-apiserver-functional-029371" [ae91cf8a-78d4-4bc8-bbd8-b08725d3faeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:02:48.185260 2815477 system_pods.go:61] "kube-controller-manager-functional-029371" [1e5748f7-147f-49f4-ba46-881bcca8f6c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:02:48.185264 2815477 system_pods.go:61] "kube-proxy-xd2gs" [9f8999eb-7efb-417d-9a06-398ee7234f0b] Running
	I1002 21:02:48.185270 2815477 system_pods.go:61] "kube-scheduler-functional-029371" [1974f0f4-e901-4694-a6eb-121fb450785f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:02:48.185274 2815477 system_pods.go:61] "storage-provisioner" [f02122ca-7ec7-49b6-a4fc-f334ffb1ff51] Running
	I1002 21:02:48.185278 2815477 system_pods.go:74] duration metric: took 3.909505ms to wait for pod list to return data ...
	I1002 21:02:48.185284 2815477 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:02:48.187883 2815477 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:02:48.187901 2815477 node_conditions.go:123] node cpu capacity is 2
	I1002 21:02:48.187911 2815477 node_conditions.go:105] duration metric: took 2.622944ms to run NodePressure ...
	I1002 21:02:48.187973 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:48.462975 2815477 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 21:02:48.467698 2815477 kubeadm.go:743] kubelet initialised
	I1002 21:02:48.467708 2815477 kubeadm.go:744] duration metric: took 4.721491ms waiting for restarted kubelet to initialise ...
	I1002 21:02:48.467722 2815477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:02:48.480479 2815477 ops.go:34] apiserver oom_adj: -16
	I1002 21:02:48.480490 2815477 kubeadm.go:601] duration metric: took 25.929648904s to restartPrimaryControlPlane
	I1002 21:02:48.480498 2815477 kubeadm.go:402] duration metric: took 26.006606486s to StartCluster
	I1002 21:02:48.480512 2815477 settings.go:142] acquiring lock: {Name:mke92114e22bdbcff74119665eced9d6b9ac1b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:02:48.480571 2815477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:02:48.481163 2815477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/kubeconfig: {Name:mkcf76851e68b723b0046b589af4cfa7ca9a3bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:02:48.481372 2815477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 21:02:48.481615 2815477 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:02:48.481655 2815477 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:02:48.481712 2815477 addons.go:69] Setting storage-provisioner=true in profile "functional-029371"
	I1002 21:02:48.481724 2815477 addons.go:238] Setting addon storage-provisioner=true in "functional-029371"
	W1002 21:02:48.481729 2815477 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:02:48.481747 2815477 host.go:66] Checking if "functional-029371" exists ...
	I1002 21:02:48.482158 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:48.483561 2815477 addons.go:69] Setting default-storageclass=true in profile "functional-029371"
	I1002 21:02:48.483578 2815477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-029371"
	I1002 21:02:48.483892 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:48.484503 2815477 out.go:179] * Verifying Kubernetes components...
	I1002 21:02:48.488400 2815477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:02:48.512851 2815477 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:02:48.515887 2815477 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:02:48.515899 2815477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:02:48.516019 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:48.521341 2815477 addons.go:238] Setting addon default-storageclass=true in "functional-029371"
	W1002 21:02:48.521351 2815477 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:02:48.521373 2815477 host.go:66] Checking if "functional-029371" exists ...
	I1002 21:02:48.521783 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:48.538897 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:48.567826 2815477 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:02:48.567842 2815477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:02:48.567904 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:48.600679 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:48.711735 2815477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:02:48.759797 2815477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:02:48.786435 2815477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:02:49.590968 2815477 node_ready.go:35] waiting up to 6m0s for node "functional-029371" to be "Ready" ...
	I1002 21:02:49.615606 2815477 node_ready.go:49] node "functional-029371" is "Ready"
	I1002 21:02:49.615622 2815477 node_ready.go:38] duration metric: took 24.634917ms for node "functional-029371" to be "Ready" ...
	I1002 21:02:49.615634 2815477 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:02:49.615693 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:49.629485 2815477 api_server.go:72] duration metric: took 1.148087915s to wait for apiserver process to appear ...
	I1002 21:02:49.629499 2815477 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:02:49.629516 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:49.637208 2815477 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:02:49.640155 2815477 addons.go:514] duration metric: took 1.158481935s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:02:49.656576 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 21:02:49.666241 2815477 api_server.go:141] control plane version: v1.34.1
	I1002 21:02:49.666257 2815477 api_server.go:131] duration metric: took 36.753865ms to wait for apiserver health ...
	I1002 21:02:49.666264 2815477 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:02:49.672996 2815477 system_pods.go:59] 8 kube-system pods found
	I1002 21:02:49.673015 2815477 system_pods.go:61] "coredns-66bc5c9577-bswh9" [effda912-e3ee-4d9f-af34-8abe9a9d3659] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:02:49.673022 2815477 system_pods.go:61] "etcd-functional-029371" [0bf73d2f-a733-44ce-b06a-2fbb6abee9d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:02:49.673027 2815477 system_pods.go:61] "kindnet-9zmhd" [2d6be820-35d6-4183-800b-2b4a0971e0bc] Running
	I1002 21:02:49.673033 2815477 system_pods.go:61] "kube-apiserver-functional-029371" [ae91cf8a-78d4-4bc8-bbd8-b08725d3faeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:02:49.673042 2815477 system_pods.go:61] "kube-controller-manager-functional-029371" [1e5748f7-147f-49f4-ba46-881bcca8f6c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:02:49.673046 2815477 system_pods.go:61] "kube-proxy-xd2gs" [9f8999eb-7efb-417d-9a06-398ee7234f0b] Running
	I1002 21:02:49.673052 2815477 system_pods.go:61] "kube-scheduler-functional-029371" [1974f0f4-e901-4694-a6eb-121fb450785f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:02:49.673054 2815477 system_pods.go:61] "storage-provisioner" [f02122ca-7ec7-49b6-a4fc-f334ffb1ff51] Running
	I1002 21:02:49.673059 2815477 system_pods.go:74] duration metric: took 6.790629ms to wait for pod list to return data ...
	I1002 21:02:49.673066 2815477 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:02:49.678854 2815477 default_sa.go:45] found service account: "default"
	I1002 21:02:49.678867 2815477 default_sa.go:55] duration metric: took 5.796577ms for default service account to be created ...
	I1002 21:02:49.678880 2815477 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:02:49.682663 2815477 system_pods.go:86] 8 kube-system pods found
	I1002 21:02:49.682693 2815477 system_pods.go:89] "coredns-66bc5c9577-bswh9" [effda912-e3ee-4d9f-af34-8abe9a9d3659] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:02:49.682701 2815477 system_pods.go:89] "etcd-functional-029371" [0bf73d2f-a733-44ce-b06a-2fbb6abee9d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:02:49.682706 2815477 system_pods.go:89] "kindnet-9zmhd" [2d6be820-35d6-4183-800b-2b4a0971e0bc] Running
	I1002 21:02:49.682712 2815477 system_pods.go:89] "kube-apiserver-functional-029371" [ae91cf8a-78d4-4bc8-bbd8-b08725d3faeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:02:49.682717 2815477 system_pods.go:89] "kube-controller-manager-functional-029371" [1e5748f7-147f-49f4-ba46-881bcca8f6c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:02:49.682720 2815477 system_pods.go:89] "kube-proxy-xd2gs" [9f8999eb-7efb-417d-9a06-398ee7234f0b] Running
	I1002 21:02:49.682725 2815477 system_pods.go:89] "kube-scheduler-functional-029371" [1974f0f4-e901-4694-a6eb-121fb450785f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:02:49.682728 2815477 system_pods.go:89] "storage-provisioner" [f02122ca-7ec7-49b6-a4fc-f334ffb1ff51] Running
	I1002 21:02:49.682734 2815477 system_pods.go:126] duration metric: took 3.84973ms to wait for k8s-apps to be running ...
	I1002 21:02:49.682740 2815477 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:02:49.682815 2815477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:02:49.702778 2815477 system_svc.go:56] duration metric: took 20.015163ms WaitForService to wait for kubelet
	I1002 21:02:49.702796 2815477 kubeadm.go:586] duration metric: took 1.221402749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:02:49.702813 2815477 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:02:49.705488 2815477 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:02:49.705503 2815477 node_conditions.go:123] node cpu capacity is 2
	I1002 21:02:49.705513 2815477 node_conditions.go:105] duration metric: took 2.695168ms to run NodePressure ...
	I1002 21:02:49.705525 2815477 start.go:241] waiting for startup goroutines ...
	I1002 21:02:49.705533 2815477 start.go:246] waiting for cluster config update ...
	I1002 21:02:49.705542 2815477 start.go:255] writing updated cluster config ...
	I1002 21:02:49.705853 2815477 ssh_runner.go:195] Run: rm -f paused
	I1002 21:02:49.709721 2815477 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:02:49.714686 2815477 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bswh9" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:02:51.720905 2815477 pod_ready.go:104] pod "coredns-66bc5c9577-bswh9" is not "Ready", error: <nil>
	W1002 21:02:54.220214 2815477 pod_ready.go:104] pod "coredns-66bc5c9577-bswh9" is not "Ready", error: <nil>
	W1002 21:02:56.721785 2815477 pod_ready.go:104] pod "coredns-66bc5c9577-bswh9" is not "Ready", error: <nil>
	I1002 21:02:58.219890 2815477 pod_ready.go:94] pod "coredns-66bc5c9577-bswh9" is "Ready"
	I1002 21:02:58.219904 2815477 pod_ready.go:86] duration metric: took 8.505203843s for pod "coredns-66bc5c9577-bswh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.222070 2815477 pod_ready.go:83] waiting for pod "etcd-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.226512 2815477 pod_ready.go:94] pod "etcd-functional-029371" is "Ready"
	I1002 21:02:58.226525 2815477 pod_ready.go:86] duration metric: took 4.443939ms for pod "etcd-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.228769 2815477 pod_ready.go:83] waiting for pod "kube-apiserver-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.233001 2815477 pod_ready.go:94] pod "kube-apiserver-functional-029371" is "Ready"
	I1002 21:02:58.233014 2815477 pod_ready.go:86] duration metric: took 4.234122ms for pod "kube-apiserver-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.235462 2815477 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.417868 2815477 pod_ready.go:94] pod "kube-controller-manager-functional-029371" is "Ready"
	I1002 21:02:58.417882 2815477 pod_ready.go:86] duration metric: took 182.40712ms for pod "kube-controller-manager-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.617898 2815477 pod_ready.go:83] waiting for pod "kube-proxy-xd2gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:59.018324 2815477 pod_ready.go:94] pod "kube-proxy-xd2gs" is "Ready"
	I1002 21:02:59.018348 2815477 pod_ready.go:86] duration metric: took 400.427253ms for pod "kube-proxy-xd2gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:59.218528 2815477 pod_ready.go:83] waiting for pod "kube-scheduler-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:03:00.419083 2815477 pod_ready.go:94] pod "kube-scheduler-functional-029371" is "Ready"
	I1002 21:03:00.419100 2815477 pod_ready.go:86] duration metric: took 1.200558587s for pod "kube-scheduler-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:03:00.419111 2815477 pod_ready.go:40] duration metric: took 10.709369394s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:03:00.480084 2815477 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:03:00.483562 2815477 out.go:179] * Done! kubectl is now configured to use "functional-029371" cluster and "default" namespace by default
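The start log above shows the standard post-restart sequence: minikube polls https://192.168.49.2:8441/healthz until the apiserver stops returning 500 (the two failing post-start hooks, rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes, clear on their own shortly after the apiserver comes up), re-applies the kindnet CNI manifest, enables the storage-provisioner and default-storageclass addons, and then waits for every kube-system pod to report Ready. A minimal Go sketch of that kind of healthz poll loop, assuming the endpoint, interval and timeout used here (illustrative only, not minikube's actual api_server.go implementation):

// healthzpoll.go - illustrative sketch of polling a kube-apiserver /healthz
// endpoint until it reports healthy, mirroring the checks in the log above.
// The URL, interval and timeout are assumptions for this example.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, interval, timeout time.Duration) error {
	// The apiserver serves a self-signed certificate in this setup, so the
	// sketch skips verification; a real client would trust the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
			// A 500 body lists which post-start hooks are still failing,
			// exactly as captured in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against the run above, such a loop would have exited after roughly six seconds, matching the "took 6.022059105s to wait for apiserver health" duration metric.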
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	62517abd539ac       1611cd07b61d5       3 seconds ago       Exited              mount-munger              0                   eb64f4b609a6b       busybox-mount                               default
	1a697c0c38a23       ce2d2cda2d858       19 seconds ago      Running             echo-server               0                   1b61ad624188c       hello-node-75c85bcc94-jvqz4                 default
	f7ec92ef7ee86       35f3cbee4fb77       10 minutes ago      Running             nginx                     0                   d2770ddcd54ff       nginx-svc                                   default
	e9301c91add10       ba04bb24b9575       10 minutes ago      Running             storage-provisioner       2                   d016164eeb92f       storage-provisioner                         kube-system
	4a78f66b8de9a       7eb2c6ff0c5a7       10 minutes ago      Running             kube-controller-manager   2                   f8f65514862b2       kube-controller-manager-functional-029371   kube-system
	ec86407873fe8       43911e833d64d       10 minutes ago      Running             kube-apiserver            0                   4bff9fa30870b       kube-apiserver-functional-029371            kube-system
	0dd8df4eab17a       b5f57ec6b9867       10 minutes ago      Running             kube-scheduler            1                   7bbfe7c234b3a       kube-scheduler-functional-029371            kube-system
	ff6176ec7ae2d       a1894772a478e       10 minutes ago      Running             etcd                      1                   f385ef2d71fec       etcd-functional-029371                      kube-system
	9363aff35a4ac       7eb2c6ff0c5a7       10 minutes ago      Exited              kube-controller-manager   1                   f8f65514862b2       kube-controller-manager-functional-029371   kube-system
	bb62981a90b2e       05baa95f5142d       10 minutes ago      Running             kube-proxy                1                   095dc989df9d3       kube-proxy-xd2gs                            kube-system
	9f4fa4e6cafcd       ba04bb24b9575       10 minutes ago      Exited              storage-provisioner       1                   d016164eeb92f       storage-provisioner                         kube-system
	e13a9218fb36c       138784d87c9c5       10 minutes ago      Running             coredns                   1                   28a525d91513d       coredns-66bc5c9577-bswh9                    kube-system
	c0544bb436a09       b1a8c6f707935       10 minutes ago      Running             kindnet-cni               1                   ebe0641167404       kindnet-9zmhd                               kube-system
	6e626b9db7e71       138784d87c9c5       11 minutes ago      Exited              coredns                   0                   28a525d91513d       coredns-66bc5c9577-bswh9                    kube-system
	fa91f8ea7d10f       b1a8c6f707935       12 minutes ago      Exited              kindnet-cni               0                   ebe0641167404       kindnet-9zmhd                               kube-system
	71353644d4012       05baa95f5142d       12 minutes ago      Exited              kube-proxy                0                   095dc989df9d3       kube-proxy-xd2gs                            kube-system
	97c3f3f108740       a1894772a478e       12 minutes ago      Exited              etcd                      0                   f385ef2d71fec       etcd-functional-029371                      kube-system
	37a0176519c77       b5f57ec6b9867       12 minutes ago      Exited              kube-scheduler            0                   7bbfe7c234b3a       kube-scheduler-functional-029371            kube-system
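The container status table above is the node's CRI-level view: with a containerd runtime it is the kind of listing `crictl ps -a` produces inside the node, showing the Exited containers left over from before the control-plane restart alongside their Running replacements. A small Go sketch that collects the same listing from this profile over `minikube ssh` (whether minikube's own log collector runs exactly this command is an assumption):

// cristatus.go - illustrative sketch: fetch the CRI container listing from a
// minikube node, similar to the table shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "functional-029371" is the profile name that appears throughout the log.
	cmd := exec.Command("minikube", "ssh", "-p", "functional-029371", "--",
		"sudo", "crictl", "ps", "-a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("crictl listing failed: %v\n%s\n", err, out)
		return
	}
	fmt.Print(string(out))
}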
	
	
	==> containerd <==
	Oct 02 21:13:15 functional-029371 containerd[3583]: time="2025-10-02T21:13:15.132910758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-mount,Uid:fbad512e-5bd9-4710-9776-20f6e7bd3473,Namespace:default,Attempt:0,}"
	Oct 02 21:13:15 functional-029371 containerd[3583]: time="2025-10-02T21:13:15.239153530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-mount,Uid:fbad512e-5bd9-4710-9776-20f6e7bd3473,Namespace:default,Attempt:0,} returns sandbox id \"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\""
	Oct 02 21:13:15 functional-029371 containerd[3583]: time="2025-10-02T21:13:15.241655085Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.494102809Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.495956082Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937506"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.498397623Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.502037436Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.502747905Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.260915587s"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.502871583Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.509172773Z" level=info msg="CreateContainer within sandbox \"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\" for container &ContainerMetadata{Name:mount-munger,Attempt:0,}"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.531859772Z" level=info msg="CreateContainer within sandbox \"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\" for &ContainerMetadata{Name:mount-munger,Attempt:0,} returns container id \"62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7\""
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.532766051Z" level=info msg="StartContainer for \"62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7\""
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.588202039Z" level=info msg="StartContainer for \"62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7\" returns successfully"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.598963010Z" level=info msg="received exit event container_id:\"62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7\"  id:\"62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7\"  pid:6909  exited_at:{seconds:1759439597  nanos:598512558}"
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.636972884Z" level=info msg="shim disconnected" id=62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7 namespace=k8s.io
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.637012786Z" level=warning msg="cleaning up after shim disconnected" id=62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7 namespace=k8s.io
	Oct 02 21:13:17 functional-029371 containerd[3583]: time="2025-10-02T21:13:17.637049054Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.273479173Z" level=info msg="StopPodSandbox for \"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\""
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.273582683Z" level=info msg="Container to stop \"62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.285262602Z" level=info msg="received exit event container_id:\"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\"  id:\"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\"  pid:6868  exit_status:137  exited_at:{seconds:1759439599  nanos:284960837}"
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.304884025Z" level=info msg="shim disconnected" id=eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff namespace=k8s.io
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.304923386Z" level=warning msg="cleaning up after shim disconnected" id=eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff namespace=k8s.io
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.304959423Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.359366709Z" level=info msg="TearDown network for sandbox \"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\" successfully"
	Oct 02 21:13:19 functional-029371 containerd[3583]: time="2025-10-02T21:13:19.359471384Z" level=info msg="StopPodSandbox for \"eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff\" returns successfully"
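The containerd journal above traces the full busybox-mount pod lifecycle on the CRI side: RunPodSandbox, a ~2.26s pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc (about 1.9 MB), CreateContainer and StartContainer for mount-munger, the container's exit, and finally StopPodSandbox with network teardown. For reference, pulling the same image directly through the containerd 1.7 Go client in the CRI's k8s.io namespace could look like the sketch below (illustrative only; the kubelet drives this through the CRI API rather than this client, and the socket path is the usual default assumed here):

// pull.go - illustrative sketch: pull the busybox image seen in the log via
// the containerd Go client, in the "k8s.io" namespace used by the CRI plugin.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Images pulled by the CRI plugin live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28.4-glibc", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}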
	
	
	==> coredns [6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36470 - 16817 "HINFO IN 8350429670381813791.6003931427677546625. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021892735s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e13a9218fb36c96f900452fa4804b05d1af634f65dabde0e99e4745bf3bdd984] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51972 - 42555 "HINFO IN 2726958689615771147.4044054909872593520. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046976422s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-029371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-029371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=functional-029371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_01_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:01:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-029371
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:13:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:13:19 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:13:19 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:13:19 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:13:19 +0000   Thu, 02 Oct 2025 21:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-029371
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd50d735b20e43169e671ed5ecbfe749
	  System UUID:                482999fa-369e-4d58-bd97-98172b118eff
	  Boot ID:                    ddea27b5-1bb4-4ff4-b6ce-678e2308ca3c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-jvqz4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     hello-node-connect-7d85dfc575-hf52j          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-bswh9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-029371                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9zmhd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-029371             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-029371    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xd2gs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-029371             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                node-controller  Node functional-029371 event: Registered Node functional-029371 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-029371 status is now: NodeReady
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-029371 event: Registered Node functional-029371 in Controller
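In the node description above, the Allocated resources totals are simply the column sums from the non-terminated pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m, i.e. 42.5% of the 2-CPU (2000m) node, displayed as 42%; memory requests 70Mi + 100Mi + 50Mi = 220Mi, roughly 2.8% of the 8022296Ki allocatable, displayed as 2% (the percentages appear to be truncated rather than rounded). The only limits set are kindnet's 100m CPU and the 170Mi + 50Mi = 220Mi memory limits contributed by coredns and kindnet.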
	
	
	==> dmesg <==
	[Oct 2 20:00] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 2 20:51] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890] <==
	{"level":"warn","ts":"2025-10-02T21:01:09.091121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.113696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.140304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.163581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.214029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.290338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:02:00.115325Z","caller":"traceutil/trace.go:172","msg":"trace[291594329] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"103.807712ms","start":"2025-10-02T21:02:00.011494Z","end":"2025-10-02T21:02:00.115302Z","steps":["trace[291594329] 'process raft request'  (duration: 103.662216ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T21:02:38.044223Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T21:02:38.044273Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-029371","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T21:02:38.044393Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:02:38.045901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:02:38.047455Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.047522Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T21:02:38.047636Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T21:02:38.047654Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T21:02:38.047956Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048012Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:02:38.048024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048106Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048130Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:02:38.048140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.050950Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T21:02:38.051086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.051113Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T21:02:38.051121Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-029371","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ff6176ec7ae2de2fdb8b2e8cbe1b6888a2b29bb1783765d18ed72f5fa5850090] <==
	{"level":"warn","ts":"2025-10-02T21:02:45.102413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.129886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.140808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.165726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.180102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.198523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.236131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.251965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.277163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.288844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.309935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.328810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.347740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.364690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.382356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.412787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.432090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.448087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.478001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.495710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.508017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.564423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38004","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:12:43.986090Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2025-10-02T21:12:44.008580Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1031,"took":"22.217035ms","hash":2016753172,"current-db-size-bytes":3067904,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1282048,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-02T21:12:44.008635Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2016753172,"revision":1031,"compact-revision":-1}
	
	
	==> kernel <==
	 21:13:21 up 16:55,  0 user,  load average: 0.21, 0.38, 1.77
	Linux functional-029371 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0544bb436a0906cfd062760bdbcd21a2d29e77e585ae36ebb930aa43c485e98] <==
	I1002 21:11:18.712247       1 main.go:301] handling current node
	I1002 21:11:28.717883       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:28.717924       1 main.go:301] handling current node
	I1002 21:11:38.711361       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:38.711395       1 main.go:301] handling current node
	I1002 21:11:48.711751       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:48.711980       1 main.go:301] handling current node
	I1002 21:11:58.716468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:58.716502       1 main.go:301] handling current node
	I1002 21:12:08.711432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:08.711466       1 main.go:301] handling current node
	I1002 21:12:18.711752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:18.711788       1 main.go:301] handling current node
	I1002 21:12:28.716151       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:28.716190       1 main.go:301] handling current node
	I1002 21:12:38.719447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:38.719485       1 main.go:301] handling current node
	I1002 21:12:48.711627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:48.711666       1 main.go:301] handling current node
	I1002 21:12:58.712900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:58.712937       1 main.go:301] handling current node
	I1002 21:13:08.711760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:13:08.711950       1 main.go:301] handling current node
	I1002 21:13:18.711432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:13:18.711490       1 main.go:301] handling current node
	
	
	==> kindnet [fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342] <==
	I1002 21:01:19.695785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:01:19.696051       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 21:01:19.696173       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:01:19.696193       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:01:19.696203       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:01:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:01:19.891576       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:01:19.891800       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:01:19.891903       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:01:19.892752       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:01:49.891919       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:01:49.892931       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:01:49.892947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:01:49.893282       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:01:51.493072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:01:51.493164       1 metrics.go:72] Registering metrics
	I1002 21:01:51.493395       1 controller.go:711] "Syncing nftables rules"
	I1002 21:01:59.897194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:01:59.897288       1 main.go:301] handling current node
	I1002 21:02:09.897832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:02:09.897866       1 main.go:301] handling current node
	I1002 21:02:19.895132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:02:19.895159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ec86407873fe8df85e4887b5c5b2b21b30f5b2fe009c3928a9a2d4b98c874b5a] <==
	I1002 21:02:46.392841       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:02:46.394347       1 aggregator.go:171] initial CRD sync complete...
	I1002 21:02:46.394462       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 21:02:46.394733       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:02:46.394833       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:02:46.394932       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:02:46.408635       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:02:46.408912       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:02:46.410234       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:02:46.418967       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:02:46.721100       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:02:47.095033       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1002 21:02:47.431828       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 21:02:47.433331       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:02:47.439034       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:02:48.174490       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:02:48.324212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:02:48.433235       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:02:48.444842       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:02:50.077548       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:03:03.853124       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.168.209"}
	I1002 21:03:10.555108       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.142.42"}
	I1002 21:03:19.105685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.24.217"}
	I1002 21:07:19.362832       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.228.9"}
	I1002 21:12:46.321891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4a78f66b8de9abe5c9ae735c1c02e72e3256c9e5545188d321dac91ce1606b57] <==
	I1002 21:02:49.700250       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:02:49.700565       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:02:49.701186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:02:49.703927       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:02:49.707401       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:02:49.707597       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:02:49.710897       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:02:49.714320       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:02:49.714957       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:02:49.717140       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:02:49.719238       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:02:49.719500       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:02:49.719678       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:02:49.719541       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:02:49.720464       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:02:49.719445       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:02:49.719430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:02:49.726201       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:02:49.731606       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:02:49.731872       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:02:49.742298       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:02:49.756078       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:02:49.756275       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:02:49.760350       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:02:49.763444       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-controller-manager [9363aff35a4acb1420657199acac0ca01f30c32a92243e6ea96ec31d175aae16] <==
	I1002 21:02:30.185628       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:02:31.435116       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 21:02:31.435148       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:31.436649       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 21:02:31.436969       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 21:02:31.437036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 21:02:31.437053       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 21:02:41.438751       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7] <==
	I1002 21:01:19.631691       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:01:19.774390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:01:19.875501       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:01:19.875540       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:01:19.876304       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:01:19.929962       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:01:19.930014       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:01:19.933989       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:01:19.934490       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:01:19.934650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:01:19.938894       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:01:19.939089       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:01:19.939124       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:01:19.939238       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:01:19.939971       1 config.go:200] "Starting service config controller"
	I1002 21:01:19.940126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:01:19.940238       1 config.go:309] "Starting node config controller"
	I1002 21:01:19.940333       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:01:20.043367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:01:20.043410       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:01:20.043424       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:01:20.048713       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [bb62981a90b2e6919f84a4d9b34bbfb6dbeaf7ea0fca18ddd27c59c4cc7382b7] <==
	I1002 21:02:28.761611       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1002 21:02:28.762697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:30.012349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:31.803349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:36.781864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1002 21:02:48.363238       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:02:48.365233       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:02:48.365530       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:02:48.400401       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:02:48.400613       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:02:48.415578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:02:48.416007       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:02:48.416157       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:48.418783       1 config.go:200] "Starting service config controller"
	I1002 21:02:48.418810       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:02:48.419572       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:02:48.419695       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:02:48.419816       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:02:48.419937       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:02:48.420888       1 config.go:309] "Starting node config controller"
	I1002 21:02:48.421046       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:02:48.421155       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:02:48.436399       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:02:48.523114       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:02:48.592161       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0dd8df4eab17a4a504ba75dcd53063299a3901716a3ee868366c80c5f68c65a9] <==
	I1002 21:02:43.746760       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:02:46.263780       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:02:46.263822       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:02:46.263834       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:02:46.264102       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:02:46.381416       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:02:46.381449       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:46.389679       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:46.390180       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:46.393786       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:02:46.394631       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:02:46.490354       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda] <==
	E1002 21:01:10.505154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 21:01:10.505442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:01:10.505584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:01:10.511789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:01:10.512019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:01:10.512128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:01:10.512226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:01:10.512321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:01:10.512551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 21:01:10.516026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:01:10.516183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:01:11.341487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:01:11.368611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:01:11.429159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:01:11.434896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:01:11.488406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:01:11.577725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:01:11.588312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 21:01:13.557394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:38.107216       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 21:02:38.107251       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 21:02:38.107270       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 21:02:38.107389       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:38.107422       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 21:02:38.107482       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 21:12:10 functional-029371 kubelet[4514]: E1002 21:12:10.670445    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:12:12 functional-029371 kubelet[4514]: E1002 21:12:12.669845    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:12:20 functional-029371 kubelet[4514]: E1002 21:12:20.669962    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-jvqz4" podUID="c6ee5f62-076c-459f-91bf-59a51539e968"
	Oct 02 21:12:25 functional-029371 kubelet[4514]: E1002 21:12:25.669925    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:12:27 functional-029371 kubelet[4514]: E1002 21:12:27.669494    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:12:35 functional-029371 kubelet[4514]: E1002 21:12:35.669867    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-jvqz4" podUID="c6ee5f62-076c-459f-91bf-59a51539e968"
	Oct 02 21:12:36 functional-029371 kubelet[4514]: E1002 21:12:36.672491    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:12:38 functional-029371 kubelet[4514]: E1002 21:12:38.669855    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:12:50 functional-029371 kubelet[4514]: E1002 21:12:50.670183    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-jvqz4" podUID="c6ee5f62-076c-459f-91bf-59a51539e968"
	Oct 02 21:12:51 functional-029371 kubelet[4514]: E1002 21:12:51.670277    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:12:51 functional-029371 kubelet[4514]: E1002 21:12:51.670425    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:13:03 functional-029371 kubelet[4514]: E1002 21:13:03.670260    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:13:03 functional-029371 kubelet[4514]: E1002 21:13:03.670654    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:13:14 functional-029371 kubelet[4514]: I1002 21:13:14.820404    4514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-75c85bcc94-jvqz4" podStartSLOduration=13.244570855 podStartE2EDuration="5m55.820385194s" podCreationTimestamp="2025-10-02 21:07:19 +0000 UTC" firstStartedPulling="2025-10-02 21:07:19.703879204 +0000 UTC m=+279.175795532" lastFinishedPulling="2025-10-02 21:13:02.279693543 +0000 UTC m=+621.751609871" observedRunningTime="2025-10-02 21:13:03.25404058 +0000 UTC m=+622.725956932" watchObservedRunningTime="2025-10-02 21:13:14.820385194 +0000 UTC m=+634.292301522"
	Oct 02 21:13:14 functional-029371 kubelet[4514]: I1002 21:13:14.921857    4514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/fbad512e-5bd9-4710-9776-20f6e7bd3473-test-volume\") pod \"busybox-mount\" (UID: \"fbad512e-5bd9-4710-9776-20f6e7bd3473\") " pod="default/busybox-mount"
	Oct 02 21:13:14 functional-029371 kubelet[4514]: I1002 21:13:14.921921    4514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczvz\" (UniqueName: \"kubernetes.io/projected/fbad512e-5bd9-4710-9776-20f6e7bd3473-kube-api-access-kczvz\") pod \"busybox-mount\" (UID: \"fbad512e-5bd9-4710-9776-20f6e7bd3473\") " pod="default/busybox-mount"
	Oct 02 21:13:15 functional-029371 kubelet[4514]: E1002 21:13:15.670261    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:13:16 functional-029371 kubelet[4514]: E1002 21:13:16.672321    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:13:19 functional-029371 kubelet[4514]: I1002 21:13:19.455751    4514 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/fbad512e-5bd9-4710-9776-20f6e7bd3473-test-volume\") pod \"fbad512e-5bd9-4710-9776-20f6e7bd3473\" (UID: \"fbad512e-5bd9-4710-9776-20f6e7bd3473\") "
	Oct 02 21:13:19 functional-029371 kubelet[4514]: I1002 21:13:19.455830    4514 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kczvz\" (UniqueName: \"kubernetes.io/projected/fbad512e-5bd9-4710-9776-20f6e7bd3473-kube-api-access-kczvz\") pod \"fbad512e-5bd9-4710-9776-20f6e7bd3473\" (UID: \"fbad512e-5bd9-4710-9776-20f6e7bd3473\") "
	Oct 02 21:13:19 functional-029371 kubelet[4514]: I1002 21:13:19.456295    4514 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbad512e-5bd9-4710-9776-20f6e7bd3473-test-volume" (OuterVolumeSpecName: "test-volume") pod "fbad512e-5bd9-4710-9776-20f6e7bd3473" (UID: "fbad512e-5bd9-4710-9776-20f6e7bd3473"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 21:13:19 functional-029371 kubelet[4514]: I1002 21:13:19.461891    4514 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbad512e-5bd9-4710-9776-20f6e7bd3473-kube-api-access-kczvz" (OuterVolumeSpecName: "kube-api-access-kczvz") pod "fbad512e-5bd9-4710-9776-20f6e7bd3473" (UID: "fbad512e-5bd9-4710-9776-20f6e7bd3473"). InnerVolumeSpecName "kube-api-access-kczvz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 21:13:19 functional-029371 kubelet[4514]: I1002 21:13:19.556765    4514 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/fbad512e-5bd9-4710-9776-20f6e7bd3473-test-volume\") on node \"functional-029371\" DevicePath \"\""
	Oct 02 21:13:19 functional-029371 kubelet[4514]: I1002 21:13:19.556974    4514 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kczvz\" (UniqueName: \"kubernetes.io/projected/fbad512e-5bd9-4710-9776-20f6e7bd3473-kube-api-access-kczvz\") on node \"functional-029371\" DevicePath \"\""
	Oct 02 21:13:20 functional-029371 kubelet[4514]: I1002 21:13:20.276942    4514 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb64f4b609a6b2431e0d04cacfa1f886159cf4d3365e7f776f7bd4037530aaff"
	
	
	==> storage-provisioner [9f4fa4e6cafcdf15d3a652b129916916db3a35a6bba6315257415306d82081ac] <==
	I1002 21:02:28.534730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:02:28.536501       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e9301c91add10f7b8320a98341322365ab0397a2b58eb545f437ffcdcab5d2df] <==
	W1002 21:12:57.286524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:59.290078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:59.296688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:01.300748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:01.307023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:03.310913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:03.317515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:05.320201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:05.324698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:07.327660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:07.332565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:09.335161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:09.339816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:11.344439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:11.353541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:13.356954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:13.361331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:15.365198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:15.372371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:17.374765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:17.379057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:19.387428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:19.395578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:21.401529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:13:21.410399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-029371 -n functional-029371
helpers_test.go:269: (dbg) Run:  kubectl --context functional-029371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-7d85dfc575-hf52j sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-029371 describe pod busybox-mount hello-node-connect-7d85dfc575-hf52j sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-029371 describe pod busybox-mount hello-node-connect-7d85dfc575-hf52j sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:13:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://62517abd539ac304f96127e96a0abdc7c13e002fb58b18a9c2b13940f90130f7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 21:13:17 +0000
	      Finished:     Thu, 02 Oct 2025 21:13:17 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kczvz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kczvz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/busybox-mount to functional-029371
	  Normal  Pulling    8s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.262s (2.262s including waiting). Image size: 1935750 bytes.
	  Normal  Created    6s    kubelet            Created container: mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-hf52j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:03:19 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwrnm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwrnm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hf52j to functional-029371
	  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:03:16 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9whq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-f9whq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/sp-pod to functional-029371
	  Warning  Failed     8m40s (x4 over 10m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m16s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m15s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (604.35s)
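The events above show the shared root cause: every pull from registry-1.docker.io answers 429 Too Many Requests, the unauthenticated Docker Hub pull rate limit, so neither "kicbase/echo-server" nor "docker.io/nginx" ever starts and the service has no endpoint to connect to. A minimal sketch of how such a failure is typically worked around outside the test itself; the secret name "regcred" and the credentials are placeholders, and the image-load variant assumes the image is already present in the host's Docker daemon:

	# Create a Docker Hub pull secret in the pod's namespace (credentials are placeholders).
	kubectl --context functional-029371 -n default create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>

	# Attach the secret to the default service account so new pods pull with authentication.
	kubectl --context functional-029371 -n default patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

	# Or side-load the image into the cluster so no registry pull is needed at all.
	minikube -p functional-029371 image load kicbase/echo-server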

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (248.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f02122ca-7ec7-49b6-a4fc-f334ffb1ff51] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003576098s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-029371 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-029371 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-029371 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-029371 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d97caa8e-1329-4661-b54c-ddad7ae3095f] Pending
helpers_test.go:352: "sp-pod" [d97caa8e-1329-4661-b54c-ddad7ae3095f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-029371 -n functional-029371
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-02 21:07:16.551160614 +0000 UTC m=+901.316236965
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-029371 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-029371 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-029371/192.168.49.2
Start Time:       Thu, 02 Oct 2025 21:03:16 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9whq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-f9whq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/sp-pod to functional-029371
  Warning  Failed     2m33s (x4 over 3m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    69s (x5 over 4m)       kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     68s (x5 over 3m59s)    kubelet            Error: ErrImagePull
  Warning  Failed     68s                    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    14s (x14 over 3m59s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     14s (x14 over 3m59s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-029371 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-029371 logs sp-pod -n default: exit status 1 (106.115869ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-029371 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
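The pod never left Pending because the nginx image could not be pulled, so kubectl logs has nothing to show ("waiting to start"). A short sketch of follow-up checks that separate a storage-provisioner problem from an image-pull problem; both commands use only objects that appear in the output above:

	# Confirm the claim actually bound; a Pending claim would implicate the provisioner, not the image.
	kubectl --context functional-029371 get pvc myclaim -o jsonpath='{.status.phase}'

	# Logs are unavailable until the container starts, so read the pod's events instead.
	kubectl --context functional-029371 -n default get events \
	  --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp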
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-029371
helpers_test.go:243: (dbg) docker inspect functional-029371:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3",
	        "Created": "2025-10-02T21:00:51.978972474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2811196,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:00:52.062744723Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/hosts",
	        "LogPath": "/var/lib/docker/containers/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3/090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3-json.log",
	        "Name": "/functional-029371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-029371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-029371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "090c5f703e0603ef3d534b06de534b9f38e45786405a99e53ae49aef5c3508b3",
	                "LowerDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30-init/diff:/var/lib/docker/overlay2/51331203fb22f22857c79ac4aca1f3d12d523fa3ef805f7f258c2d1849e728ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/merged",
	                "UpperDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/diff",
	                "WorkDir": "/var/lib/docker/overlay2/caf7df263035e1f28a1da9be1443cbf5d19bd61f80924c026053c54e47c04e30/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-029371",
	                "Source": "/var/lib/docker/volumes/functional-029371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-029371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-029371",
	                "name.minikube.sigs.k8s.io": "functional-029371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1fd369d7c675f494df1af8bbeb228ab303420ec6e440618440a08cd22840ddd9",
	            "SandboxKey": "/var/run/docker/netns/1fd369d7c675",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36127"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36128"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36131"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36129"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36130"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-029371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:8a:b4:10:41:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "00e3563aa4808dcd5f3a224a2151deb754278db778c1a4a02e08e667b6d2949c",
	                    "EndpointID": "5ce2c0a1f336f8f0a42c5f4a14f366cc54ee230716ae07896a98b853c1146cb5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-029371",
	                        "090c5f703e06"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
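The full docker inspect dump above is verbose; the fields the post-mortem actually cares about (the published host ports and the node's address on the "functional-029371" network) can be extracted with a Go-template format string, the same lookup the minikube log further down performs for the 22/tcp SSH port. A small sketch, with field paths taken from the JSON above:

	# Host port published for the apiserver port (8441/tcp) inside the container.
	docker container inspect functional-029371 \
	  -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'

	# Node IP on the "functional-029371" Docker network.
	docker container inspect functional-029371 \
	  -f '{{(index .NetworkSettings.Networks "functional-029371").IPAddress}}'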
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-029371 -n functional-029371
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 logs -n 25: (1.452139439s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-029371 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:02 UTC │ 02 Oct 25 21:02 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 21:02 UTC │ 02 Oct 25 21:02 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 21:02 UTC │ 02 Oct 25 21:02 UTC │
	│ kubectl │ functional-029371 kubectl -- --context functional-029371 get pods                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:02 UTC │ 02 Oct 25 21:02 UTC │
	│ start   │ -p functional-029371 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:02 UTC │ 02 Oct 25 21:03 UTC │
	│ service │ invalid-svc -p functional-029371                                                                                          │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ config  │ functional-029371 config unset cpus                                                                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ cp      │ functional-029371 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ config  │ functional-029371 config get cpus                                                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ config  │ functional-029371 config set cpus 2                                                                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ config  │ functional-029371 config get cpus                                                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ config  │ functional-029371 config unset cpus                                                                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ ssh     │ functional-029371 ssh -n functional-029371 sudo cat /home/docker/cp-test.txt                                              │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ config  │ functional-029371 config get cpus                                                                                         │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ ssh     │ functional-029371 ssh echo hello                                                                                          │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ cp      │ functional-029371 cp functional-029371:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd315440799/001/cp-test.txt │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ ssh     │ functional-029371 ssh cat /etc/hostname                                                                                   │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ ssh     │ functional-029371 ssh -n functional-029371 sudo cat /home/docker/cp-test.txt                                              │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ tunnel  │ functional-029371 tunnel --alsologtostderr                                                                                │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ tunnel  │ functional-029371 tunnel --alsologtostderr                                                                                │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ cp      │ functional-029371 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ tunnel  │ functional-029371 tunnel --alsologtostderr                                                                                │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │                     │
	│ ssh     │ functional-029371 ssh -n functional-029371 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ addons  │ functional-029371 addons list                                                                                             │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	│ addons  │ functional-029371 addons list -o json                                                                                     │ functional-029371 │ jenkins │ v1.37.0 │ 02 Oct 25 21:03 UTC │ 02 Oct 25 21:03 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:02:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:02:18.327574 2815477 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:02:18.327693 2815477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:02:18.327697 2815477 out.go:374] Setting ErrFile to fd 2...
	I1002 21:02:18.327701 2815477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:02:18.327945 2815477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:02:18.328299 2815477 out.go:368] Setting JSON to false
	I1002 21:02:18.329233 2815477 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":60288,"bootTime":1759378651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 21:02:18.329314 2815477 start.go:140] virtualization:  
	I1002 21:02:18.332900 2815477 out.go:179] * [functional-029371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:02:18.335853 2815477 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:02:18.335921 2815477 notify.go:220] Checking for updates...
	I1002 21:02:18.341622 2815477 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:02:18.344437 2815477 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:02:18.347286 2815477 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 21:02:18.350025 2815477 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:02:18.352944 2815477 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:02:18.356234 2815477 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:02:18.356368 2815477 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:02:18.387224 2815477 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:02:18.387381 2815477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:02:18.454864 2815477 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 21:02:18.444590801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:02:18.454973 2815477 docker.go:318] overlay module found
	I1002 21:02:18.458051 2815477 out.go:179] * Using the docker driver based on existing profile
	I1002 21:02:18.460868 2815477 start.go:304] selected driver: docker
	I1002 21:02:18.460877 2815477 start.go:924] validating driver "docker" against &{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:02:18.460998 2815477 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:02:18.461129 2815477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:02:18.520415 2815477 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 21:02:18.51090881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:02:18.520834 2815477 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:02:18.520854 2815477 cni.go:84] Creating CNI manager for ""
	I1002 21:02:18.520911 2815477 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 21:02:18.520952 2815477 start.go:348] cluster config:
	{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:02:18.525815 2815477 out.go:179] * Starting "functional-029371" primary control-plane node in "functional-029371" cluster
	I1002 21:02:18.528712 2815477 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 21:02:18.531602 2815477 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:02:18.534475 2815477 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 21:02:18.534526 2815477 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 21:02:18.534540 2815477 cache.go:58] Caching tarball of preloaded images
	I1002 21:02:18.534570 2815477 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:02:18.534639 2815477 preload.go:233] Found /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 21:02:18.534647 2815477 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 21:02:18.534762 2815477 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/config.json ...
	I1002 21:02:18.554621 2815477 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:02:18.554633 2815477 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:02:18.554661 2815477 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:02:18.554684 2815477 start.go:360] acquireMachinesLock for functional-029371: {Name:mk4a1a504d880be64e2f8361d5fd38b59990af37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:02:18.554753 2815477 start.go:364] duration metric: took 48.197µs to acquireMachinesLock for "functional-029371"
	I1002 21:02:18.554775 2815477 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:02:18.554786 2815477 fix.go:54] fixHost starting: 
	I1002 21:02:18.555045 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:18.572013 2815477 fix.go:112] recreateIfNeeded on functional-029371: state=Running err=<nil>
	W1002 21:02:18.572033 2815477 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:02:18.575372 2815477 out.go:252] * Updating the running docker "functional-029371" container ...
	I1002 21:02:18.575412 2815477 machine.go:93] provisionDockerMachine start ...
	I1002 21:02:18.575507 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:18.592392 2815477 main.go:141] libmachine: Using SSH client type: native
	I1002 21:02:18.592713 2815477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36127 <nil> <nil>}
	I1002 21:02:18.592721 2815477 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:02:18.726945 2815477 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-029371
	
	I1002 21:02:18.726959 2815477 ubuntu.go:182] provisioning hostname "functional-029371"
	I1002 21:02:18.727021 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:18.745596 2815477 main.go:141] libmachine: Using SSH client type: native
	I1002 21:02:18.745894 2815477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36127 <nil> <nil>}
	I1002 21:02:18.745903 2815477 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-029371 && echo "functional-029371" | sudo tee /etc/hostname
	I1002 21:02:18.893021 2815477 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-029371
	
	I1002 21:02:18.893086 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:18.913820 2815477 main.go:141] libmachine: Using SSH client type: native
	I1002 21:02:18.914150 2815477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36127 <nil> <nil>}
	I1002 21:02:18.914168 2815477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-029371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-029371/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-029371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:02:19.055644 2815477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:02:19.055668 2815477 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-2783765/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-2783765/.minikube}
	I1002 21:02:19.055690 2815477 ubuntu.go:190] setting up certificates
	I1002 21:02:19.055707 2815477 provision.go:84] configureAuth start
	I1002 21:02:19.055790 2815477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-029371
	I1002 21:02:19.073703 2815477 provision.go:143] copyHostCerts
	I1002 21:02:19.073759 2815477 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem, removing ...
	I1002 21:02:19.073776 2815477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem
	I1002 21:02:19.073845 2815477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.pem (1078 bytes)
	I1002 21:02:19.073938 2815477 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem, removing ...
	I1002 21:02:19.073942 2815477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem
	I1002 21:02:19.073961 2815477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/cert.pem (1123 bytes)
	I1002 21:02:19.074009 2815477 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem, removing ...
	I1002 21:02:19.074012 2815477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem
	I1002 21:02:19.074029 2815477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-2783765/.minikube/key.pem (1675 bytes)
	I1002 21:02:19.074079 2815477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem org=jenkins.functional-029371 san=[127.0.0.1 192.168.49.2 functional-029371 localhost minikube]
	I1002 21:02:19.360043 2815477 provision.go:177] copyRemoteCerts
	I1002 21:02:19.360097 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:02:19.360140 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.377771 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.475182 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:02:19.493095 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:02:19.511348 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:02:19.529739 2815477 provision.go:87] duration metric: took 474.02038ms to configureAuth
	I1002 21:02:19.529756 2815477 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:02:19.529968 2815477 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:02:19.529974 2815477 machine.go:96] duration metric: took 954.557056ms to provisionDockerMachine
	I1002 21:02:19.529981 2815477 start.go:293] postStartSetup for "functional-029371" (driver="docker")
	I1002 21:02:19.529989 2815477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:02:19.530036 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:02:19.530074 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.550170 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.647315 2815477 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:02:19.650603 2815477 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:02:19.650622 2815477 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:02:19.650630 2815477 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/addons for local assets ...
	I1002 21:02:19.650680 2815477 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-2783765/.minikube/files for local assets ...
	I1002 21:02:19.650755 2815477 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem -> 27856302.pem in /etc/ssl/certs
	I1002 21:02:19.650830 2815477 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/test/nested/copy/2785630/hosts -> hosts in /etc/test/nested/copy/2785630
	I1002 21:02:19.650876 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2785630
	I1002 21:02:19.658354 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem --> /etc/ssl/certs/27856302.pem (1708 bytes)
	I1002 21:02:19.677965 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/test/nested/copy/2785630/hosts --> /etc/test/nested/copy/2785630/hosts (40 bytes)
	I1002 21:02:19.695964 2815477 start.go:296] duration metric: took 165.969249ms for postStartSetup
	I1002 21:02:19.696050 2815477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:02:19.696087 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.712594 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.805107 2815477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:02:19.810194 2815477 fix.go:56] duration metric: took 1.255406029s for fixHost
	I1002 21:02:19.810209 2815477 start.go:83] releasing machines lock for "functional-029371", held for 1.255449099s
	I1002 21:02:19.810284 2815477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-029371
	I1002 21:02:19.830377 2815477 ssh_runner.go:195] Run: cat /version.json
	I1002 21:02:19.830419 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.830725 2815477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:02:19.830772 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:19.854224 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:19.856708 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:20.046165 2815477 ssh_runner.go:195] Run: systemctl --version
	I1002 21:02:20.054787 2815477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:02:20.060565 2815477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:02:20.060633 2815477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:02:20.069479 2815477 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:02:20.069494 2815477 start.go:495] detecting cgroup driver to use...
	I1002 21:02:20.069525 2815477 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:02:20.069572 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 21:02:20.086314 2815477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 21:02:20.102278 2815477 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:02:20.102334 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:02:20.120181 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:02:20.135751 2815477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:02:20.282482 2815477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:02:20.435863 2815477 docker.go:234] disabling docker service ...
	I1002 21:02:20.435917 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:02:20.453580 2815477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:02:20.467114 2815477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:02:20.605677 2815477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:02:20.748817 2815477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:02:20.762948 2815477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:02:20.779110 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 21:02:20.789474 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 21:02:20.799194 2815477 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 21:02:20.799250 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 21:02:20.809723 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 21:02:20.819156 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 21:02:20.828438 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 21:02:20.837630 2815477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:02:20.845533 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 21:02:20.854493 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 21:02:20.863163 2815477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 21:02:20.872496 2815477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:02:20.879706 2815477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:02:20.887365 2815477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:02:21.029034 2815477 ssh_runner.go:195] Run: sudo systemctl restart containerd
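The run of sed commands above is how minikube aligns /etc/containerd/config.toml with the "cgroupfs" driver it detected on the host before restarting containerd. A minimal sketch of the same reconfiguration done by hand, using only the paths and values that appear in the log (not a general recipe for other containerd versions):

	# force the runc v2 runtime and the cgroupfs driver, then restart containerd
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload
	sudo systemctl restart containerd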
	I1002 21:02:21.349477 2815477 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 21:02:21.349535 2815477 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 21:02:21.353746 2815477 start.go:563] Will wait 60s for crictl version
	I1002 21:02:21.353799 2815477 ssh_runner.go:195] Run: which crictl
	I1002 21:02:21.357762 2815477 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:02:21.384810 2815477 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 21:02:21.384873 2815477 ssh_runner.go:195] Run: containerd --version
	I1002 21:02:21.409344 2815477 ssh_runner.go:195] Run: containerd --version
	I1002 21:02:21.438032 2815477 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 21:02:21.441096 2815477 cli_runner.go:164] Run: docker network inspect functional-029371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:02:21.457329 2815477 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:02:21.464511 2815477 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 21:02:21.467499 2815477 kubeadm.go:883] updating cluster {Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:02:21.467622 2815477 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 21:02:21.467726 2815477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:02:21.494454 2815477 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 21:02:21.494466 2815477 containerd.go:534] Images already preloaded, skipping extraction
	I1002 21:02:21.494532 2815477 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:02:21.522744 2815477 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 21:02:21.522756 2815477 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:02:21.522762 2815477 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 containerd true true} ...
	I1002 21:02:21.522857 2815477 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-029371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:02:21.522923 2815477 ssh_runner.go:195] Run: sudo crictl info
	I1002 21:02:21.550642 2815477 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 21:02:21.550659 2815477 cni.go:84] Creating CNI manager for ""
	I1002 21:02:21.550667 2815477 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 21:02:21.550676 2815477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:02:21.550699 2815477 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-029371 NodeName:functional-029371 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:02:21.550809 2815477 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-029371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:02:21.550872 2815477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:02:21.559173 2815477 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:02:21.559240 2815477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:02:21.567410 2815477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1002 21:02:21.581080 2815477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:02:21.594319 2815477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2080 bytes)
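At this point the rendered kubeadm config shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a quick sanity check (a sketch, not part of the test flow), the admission-plugin override recorded in ExtraOptions above can be confirmed in the rendered file:

	# expect the NamespaceAutoProvision override from the ExtraOptions above
	sudo grep -A1 'enable-admission-plugins' /var/tmp/minikube/kubeadm.yaml.new
	#   - name: "enable-admission-plugins"
	#     value: "NamespaceAutoProvision"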
	I1002 21:02:21.607749 2815477 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:02:21.611840 2815477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:02:21.750717 2815477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:02:21.764532 2815477 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371 for IP: 192.168.49.2
	I1002 21:02:21.764544 2815477 certs.go:195] generating shared ca certs ...
	I1002 21:02:21.764559 2815477 certs.go:227] acquiring lock for ca certs: {Name:mk9dd0ab4a99d312fca91f03b1dec8574d28a55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:02:21.764715 2815477 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key
	I1002 21:02:21.764757 2815477 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key
	I1002 21:02:21.764763 2815477 certs.go:257] generating profile certs ...
	I1002 21:02:21.764842 2815477 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.key
	I1002 21:02:21.764886 2815477 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/apiserver.key.13d3535d
	I1002 21:02:21.764924 2815477 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/proxy-client.key
	I1002 21:02:21.765029 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/2785630.pem (1338 bytes)
	W1002 21:02:21.765053 2815477 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/2785630_empty.pem, impossibly tiny 0 bytes
	I1002 21:02:21.765060 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:02:21.765081 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:02:21.765100 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:02:21.765124 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/key.pem (1675 bytes)
	I1002 21:02:21.765167 2815477 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem (1708 bytes)
	I1002 21:02:21.765738 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:02:21.787950 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:02:21.811445 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:02:21.831379 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:02:21.849949 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:02:21.868825 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:02:21.894520 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:02:21.912602 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:02:21.930598 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/certs/2785630.pem --> /usr/share/ca-certificates/2785630.pem (1338 bytes)
	I1002 21:02:21.949599 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/ssl/certs/27856302.pem --> /usr/share/ca-certificates/27856302.pem (1708 bytes)
	I1002 21:02:21.968165 2815477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:02:21.985867 2815477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:02:21.999642 2815477 ssh_runner.go:195] Run: openssl version
	I1002 21:02:22.008041 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2785630.pem && ln -fs /usr/share/ca-certificates/2785630.pem /etc/ssl/certs/2785630.pem"
	I1002 21:02:22.018009 2815477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2785630.pem
	I1002 21:02:22.023044 2815477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:00 /usr/share/ca-certificates/2785630.pem
	I1002 21:02:22.023105 2815477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2785630.pem
	I1002 21:02:22.080761 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2785630.pem /etc/ssl/certs/51391683.0"
	I1002 21:02:22.089921 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27856302.pem && ln -fs /usr/share/ca-certificates/27856302.pem /etc/ssl/certs/27856302.pem"
	I1002 21:02:22.101316 2815477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27856302.pem
	I1002 21:02:22.105417 2815477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:00 /usr/share/ca-certificates/27856302.pem
	I1002 21:02:22.105474 2815477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27856302.pem
	I1002 21:02:22.146642 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27856302.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:02:22.155623 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:02:22.164052 2815477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:02:22.167667 2815477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:53 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:02:22.167721 2815477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:02:22.208766 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
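The ln/openssl sequence above wires the CA files copied into /usr/share/ca-certificates into the system trust store: each certificate is linked under /etc/ssl/certs by its OpenSSL subject hash. A minimal sketch of how one of those hash symlinks (b5213941.0 for minikubeCA.pem in the log) is derived:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"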
	I1002 21:02:22.216829 2815477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:02:22.220598 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:02:22.261495 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:02:22.304687 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:02:22.346589 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:02:22.389278 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:02:22.432804 2815477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
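The six openssl calls above all use the same pattern: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours. A one-cert sketch of that check, using a path from the log:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"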
	I1002 21:02:22.473902 2815477 kubeadm.go:400] StartCluster: {Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:02:22.473988 2815477 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 21:02:22.474053 2815477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:02:22.513156 2815477 cri.go:89] found id: "6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435"
	I1002 21:02:22.513167 2815477 cri.go:89] found id: "5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c"
	I1002 21:02:22.513170 2815477 cri.go:89] found id: "fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342"
	I1002 21:02:22.513177 2815477 cri.go:89] found id: "71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7"
	I1002 21:02:22.513179 2815477 cri.go:89] found id: "d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a"
	I1002 21:02:22.513182 2815477 cri.go:89] found id: "928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539"
	I1002 21:02:22.513184 2815477 cri.go:89] found id: "97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890"
	I1002 21:02:22.513187 2815477 cri.go:89] found id: "37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda"
	I1002 21:02:22.513189 2815477 cri.go:89] found id: ""
	I1002 21:02:22.513244 2815477 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1002 21:02:22.541939 2815477 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","pid":1728,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9/rootfs","created":"2025-10-02T21:01:19.281829803Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xd2gs_9f8999eb-7efb-417d-9a06-398ee7234f0b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xd2gs","io.kubernetes.cri.sandbox-
namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9f8999eb-7efb-417d-9a06-398ee7234f0b"},"owner":"root"},{"ociVersion":"1.2.1","id":"28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","pid":2091,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1/rootfs","created":"2025-10-02T21:02:00.69289394Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-bswh9_effda912-e3ee-4d9f-af34-8abe9a9d3659","io.kubernetes.cri.sandbox-memory":"178257920","io.kube
rnetes.cri.sandbox-name":"coredns-66bc5c9577-bswh9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"effda912-e3ee-4d9f-af34-8abe9a9d3659"},"owner":"root"},{"ociVersion":"1.2.1","id":"37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda","pid":1314,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda/rootfs","created":"2025-10-02T21:01:06.242121573Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-syst
em","io.kubernetes.cri.sandbox-uid":"d6601939fa1d9587e15055ca9ac3c312"},"owner":"root"},{"ociVersion":"1.2.1","id":"5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c","pid":2125,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c/rootfs","created":"2025-10-02T21:02:00.753021077Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f02122ca-7ec7-49b6-a4fc-f334ffb1ff51"},"owner":"root"},{"ociVersion":"1
.2.1","id":"6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435","pid":2167,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435/rootfs","created":"2025-10-02T21:02:00.834467894Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-bswh9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"effda912-e3ee-4d9f-af34-8abe9a9d3659"},"owner":"root"},{"ociVersion":"1.2.1","id":"71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7","pid":1783,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7/rootfs","created":"2025-10-02T21:01:19.507507185Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9","io.kubernetes.cri.sandbox-name":"kube-proxy-xd2gs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9f8999eb-7efb-417d-9a06-398ee7234f0b"},"owner":"root"},{"ociVersion":"1.2.1","id":"7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","pid":1164,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","
rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052/rootfs","created":"2025-10-02T21:01:06.054376326Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-029371_d6601939fa1d9587e15055ca9ac3c312","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d6601939fa1d9587e15055ca9ac3c312"},"owner":"root"},{"ociVersion":"1.2.1","id":"928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539","pid":1404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2
.task/k8s.io/928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539/rootfs","created":"2025-10-02T21:01:06.367174341Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"eb13f944745743fe45a252f830c55d2d"},"owner":"root"},{"ociVersion":"1.2.1","id":"97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890","pid":1341,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890","rootfs":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890/rootfs","created":"2025-10-02T21:01:06.278743308Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","io.kubernetes.cri.sandbox-name":"etcd-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f863e7803f44c7150b86910fae3132d1"},"owner":"root"},{"ociVersion":"1.2.1","id":"b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","pid":1248,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5/rootfs","created":"2025-10-02T2
1:01:06.150511745Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-029371_eb13f944745743fe45a252f830c55d2d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"eb13f944745743fe45a252f830c55d2d"},"owner":"root"},{"ociVersion":"1.2.1","id":"d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","pid":2029,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d016164eeb92f9a7
04af715d3c123e7de84043633d9dc823690f2f6925faed45/rootfs","created":"2025-10-02T21:02:00.633606611Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f02122ca-7ec7-49b6-a4fc-f334ffb1ff51","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f02122ca-7ec7-49b6-a4fc-f334ffb1ff51"},"owner":"root"},{"ociVersion":"1.2.1","id":"d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a","pid":1415,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a","rootfs":"/run/cont
ainerd/io.containerd.runtime.v2.task/k8s.io/d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a/rootfs","created":"2025-10-02T21:01:06.394823582Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bd369b26fb29618a00350f05f199620"},"owner":"root"},{"ociVersion":"1.2.1","id":"ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","pid":1700,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe0641167404545fd9dd5edf0b199e21
f9a078f621b762a8223e54ea012cdde/rootfs","created":"2025-10-02T21:01:19.256501217Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9zmhd_2d6be820-35d6-4183-800b-2b4a0971e0bc","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-9zmhd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2d6be820-35d6-4183-800b-2b4a0971e0bc"},"owner":"root"},{"ociVersion":"1.2.1","id":"f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","pid":1206,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","rootfs":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c/rootfs","created":"2025-10-02T21:01:06.097545635Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-029371_f863e7803f44c7150b86910fae3132d1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f863e7803f44c7150b86910fae3132d1"},"owner":"root"},{"ociVersion":"1.2.1","id":"f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f","pid":1273,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8f65514862b2f4f45b9907e92c9331a8fcc3d3b8
4cc4be98d04604b846c0a3f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f/rootfs","created":"2025-10-02T21:01:06.171730182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-029371_0bd369b26fb29618a00350f05f199620","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-029371","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bd369b26fb29618a00350f05f199620"},"owner":"root"},{"ociVersion":"1.2.1","id":"fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342","pid":1781,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342/rootfs","created":"2025-10-02T21:01:19.500303822Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde","io.kubernetes.cri.sandbox-name":"kindnet-9zmhd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2d6be820-35d6-4183-800b-2b4a0971e0bc"},"owner":"root"}]
	I1002 21:02:22.542232 2815477 cri.go:126] list returned 16 containers
	I1002 21:02:22.542240 2815477 cri.go:129] container: {ID:095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9 Status:running}
	I1002 21:02:22.542260 2815477 cri.go:131] skipping 095dc989df9d352fb47a553ed491bcb75c5e4a1d143b880788ad02909ec3c9e9 - not in ps
	I1002 21:02:22.542264 2815477 cri.go:129] container: {ID:28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1 Status:running}
	I1002 21:02:22.542269 2815477 cri.go:131] skipping 28a525d91513d095353611917acb50b9d14fac9c66b4f813cfc45eee15ed39c1 - not in ps
	I1002 21:02:22.542271 2815477 cri.go:129] container: {ID:37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda Status:running}
	I1002 21:02:22.542277 2815477 cri.go:135] skipping {37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda running}: state = "running", want "paused"
	I1002 21:02:22.542284 2815477 cri.go:129] container: {ID:5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c Status:running}
	I1002 21:02:22.542289 2815477 cri.go:135] skipping {5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c running}: state = "running", want "paused"
	I1002 21:02:22.542295 2815477 cri.go:129] container: {ID:6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 Status:running}
	I1002 21:02:22.542300 2815477 cri.go:135] skipping {6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 running}: state = "running", want "paused"
	I1002 21:02:22.542304 2815477 cri.go:129] container: {ID:71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 Status:running}
	I1002 21:02:22.542308 2815477 cri.go:135] skipping {71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 running}: state = "running", want "paused"
	I1002 21:02:22.542312 2815477 cri.go:129] container: {ID:7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052 Status:running}
	I1002 21:02:22.542317 2815477 cri.go:131] skipping 7bbfe7c234b3a898c2ddb8c8a97d591e5f88a1d679c44d92486d4defc9167052 - not in ps
	I1002 21:02:22.542320 2815477 cri.go:129] container: {ID:928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 Status:running}
	I1002 21:02:22.542325 2815477 cri.go:135] skipping {928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 running}: state = "running", want "paused"
	I1002 21:02:22.542329 2815477 cri.go:129] container: {ID:97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 Status:running}
	I1002 21:02:22.542336 2815477 cri.go:135] skipping {97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 running}: state = "running", want "paused"
	I1002 21:02:22.542340 2815477 cri.go:129] container: {ID:b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5 Status:running}
	I1002 21:02:22.542344 2815477 cri.go:131] skipping b59d311db6e5ba70c65d985ff36bfa51b9dbd0dc4dfc8ac8d2873fc8df2afaf5 - not in ps
	I1002 21:02:22.542348 2815477 cri.go:129] container: {ID:d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45 Status:running}
	I1002 21:02:22.542352 2815477 cri.go:131] skipping d016164eeb92f9a704af715d3c123e7de84043633d9dc823690f2f6925faed45 - not in ps
	I1002 21:02:22.542356 2815477 cri.go:129] container: {ID:d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a Status:running}
	I1002 21:02:22.542361 2815477 cri.go:135] skipping {d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a running}: state = "running", want "paused"
	I1002 21:02:22.542366 2815477 cri.go:129] container: {ID:ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde Status:running}
	I1002 21:02:22.542370 2815477 cri.go:131] skipping ebe0641167404545fd9dd5edf0b199e21f9a078f621b762a8223e54ea012cdde - not in ps
	I1002 21:02:22.542372 2815477 cri.go:129] container: {ID:f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c Status:running}
	I1002 21:02:22.542377 2815477 cri.go:131] skipping f385ef2d71fec71d4f8e6559453a98e806da7a8c7644b0214fc3fe769cb8e57c - not in ps
	I1002 21:02:22.542380 2815477 cri.go:129] container: {ID:f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f Status:running}
	I1002 21:02:22.542384 2815477 cri.go:131] skipping f8f65514862b2f4f45b9907e92c9331a8fcc3d3b84cc4be98d04604b846c0a3f - not in ps
	I1002 21:02:22.542386 2815477 cri.go:129] container: {ID:fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 Status:running}
	I1002 21:02:22.542393 2815477 cri.go:135] skipping {fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 running}: state = "running", want "paused"
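The skip decisions above combine two views of the node: the CRI listing (crictl) scoped to the kube-system namespace, and the low-level runc state for each task, which is where the running/paused status comes from. A sketch of querying both by hand, assuming jq is available on the node (it is not part of the test flow):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[] | "\(.id) \(.status)"'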
	I1002 21:02:22.542448 2815477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:02:22.550817 2815477 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:02:22.550836 2815477 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:02:22.550886 2815477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:02:22.558336 2815477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:22.558844 2815477 kubeconfig.go:125] found "functional-029371" server: "https://192.168.49.2:8441"
	I1002 21:02:22.560196 2815477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:02:22.569716 2815477 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 21:01:00.929404548 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 21:02:21.602771014 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
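The unified diff above is the drift check: the kubeadm.yaml applied when the cluster was first created is compared against the freshly rendered .new file, and any difference (here, the enable-admission-plugins override) means the cluster is reconfigured from the new file rather than simply restarted. The same check by hand, using the paths from the log:

	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo "kubeadm config drift detected - cluster will be reconfigured from the .new file"
	fi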
	I1002 21:02:22.569725 2815477 kubeadm.go:1160] stopping kube-system containers ...
	I1002 21:02:22.569736 2815477 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1002 21:02:22.569793 2815477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:02:22.598953 2815477 cri.go:89] found id: "6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435"
	I1002 21:02:22.598965 2815477 cri.go:89] found id: "5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c"
	I1002 21:02:22.598968 2815477 cri.go:89] found id: "fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342"
	I1002 21:02:22.598971 2815477 cri.go:89] found id: "71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7"
	I1002 21:02:22.598979 2815477 cri.go:89] found id: "d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a"
	I1002 21:02:22.598982 2815477 cri.go:89] found id: "928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539"
	I1002 21:02:22.598985 2815477 cri.go:89] found id: "97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890"
	I1002 21:02:22.598987 2815477 cri.go:89] found id: "37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda"
	I1002 21:02:22.598990 2815477 cri.go:89] found id: ""
	I1002 21:02:22.598994 2815477 cri.go:252] Stopping containers: [6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a 928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda]
	I1002 21:02:22.599061 2815477 ssh_runner.go:195] Run: which crictl
	I1002 21:02:22.603148 2815477 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a 928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda
	I1002 21:02:38.161400 2815477 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435 5f049c6aa0114697506f2f6717e1c5a38f71dc40f78621d440f404815451043c fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342 71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7 d18fdfc3a6f406846a8cdeffa127bffb9405b08711f578531fc002506fba701a 928e0db0088fae774640e7903fc78932ae86b3ad46996ebf271695c248105539 97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890 37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda: (15.55821054s)
	I1002 21:02:38.161464 2815477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 21:02:38.261721 2815477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:02:38.269735 2815477 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 21:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 21:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 21:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 21:01 /etc/kubernetes/scheduler.conf
	
	I1002 21:02:38.269802 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 21:02:38.278019 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 21:02:38.285918 2815477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:38.285976 2815477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:02:38.293979 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 21:02:38.301949 2815477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:38.302011 2815477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:02:38.309539 2815477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 21:02:38.317865 2815477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:02:38.317919 2815477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
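Each of the three grep/rm pairs above checks whether an existing kubeconfig still points at https://control-plane.minikube.internal:8441 and deletes it when it does not, so that the kubeadm phases that follow regenerate it. A compact sketch of the same loop (file names and endpoint taken from the log):

	for f in kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done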
	I1002 21:02:38.325506 2815477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:02:38.333806 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:38.382285 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.300082 2815477 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.917773068s)
	I1002 21:02:40.300140 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.535875 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.596445 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:40.674251 2815477 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:02:40.674318 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:41.175273 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:41.675211 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:41.698203 2815477 api_server.go:72] duration metric: took 1.023953292s to wait for apiserver process to appear ...
	I1002 21:02:41.698218 2815477 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:02:41.698248 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:46.227320 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:02:46.227336 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:02:46.227353 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:46.268665 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:02:46.268680 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:02:46.699210 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:46.720610 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:02:46.720630 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:02:47.198789 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:47.207804 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:02:47.207833 2815477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:02:47.698385 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:47.706580 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 21:02:47.720262 2815477 api_server.go:141] control plane version: v1.34.1
	I1002 21:02:47.720282 2815477 api_server.go:131] duration metric: took 6.022059105s to wait for apiserver health ...
	I1002 21:02:47.720290 2815477 cni.go:84] Creating CNI manager for ""
	I1002 21:02:47.720295 2815477 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 21:02:47.723550 2815477 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:02:47.726413 2815477 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:02:47.730489 2815477 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:02:47.730499 2815477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:02:47.743852 2815477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:02:48.181360 2815477 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:02:48.185214 2815477 system_pods.go:59] 8 kube-system pods found
	I1002 21:02:48.185235 2815477 system_pods.go:61] "coredns-66bc5c9577-bswh9" [effda912-e3ee-4d9f-af34-8abe9a9d3659] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:02:48.185244 2815477 system_pods.go:61] "etcd-functional-029371" [0bf73d2f-a733-44ce-b06a-2fbb6abee9d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:02:48.185249 2815477 system_pods.go:61] "kindnet-9zmhd" [2d6be820-35d6-4183-800b-2b4a0971e0bc] Running
	I1002 21:02:48.185254 2815477 system_pods.go:61] "kube-apiserver-functional-029371" [ae91cf8a-78d4-4bc8-bbd8-b08725d3faeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:02:48.185260 2815477 system_pods.go:61] "kube-controller-manager-functional-029371" [1e5748f7-147f-49f4-ba46-881bcca8f6c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:02:48.185264 2815477 system_pods.go:61] "kube-proxy-xd2gs" [9f8999eb-7efb-417d-9a06-398ee7234f0b] Running
	I1002 21:02:48.185270 2815477 system_pods.go:61] "kube-scheduler-functional-029371" [1974f0f4-e901-4694-a6eb-121fb450785f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:02:48.185274 2815477 system_pods.go:61] "storage-provisioner" [f02122ca-7ec7-49b6-a4fc-f334ffb1ff51] Running
	I1002 21:02:48.185278 2815477 system_pods.go:74] duration metric: took 3.909505ms to wait for pod list to return data ...
	I1002 21:02:48.185284 2815477 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:02:48.187883 2815477 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:02:48.187901 2815477 node_conditions.go:123] node cpu capacity is 2
	I1002 21:02:48.187911 2815477 node_conditions.go:105] duration metric: took 2.622944ms to run NodePressure ...
	I1002 21:02:48.187973 2815477 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:02:48.462975 2815477 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 21:02:48.467698 2815477 kubeadm.go:743] kubelet initialised
	I1002 21:02:48.467708 2815477 kubeadm.go:744] duration metric: took 4.721491ms waiting for restarted kubelet to initialise ...
	I1002 21:02:48.467722 2815477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:02:48.480479 2815477 ops.go:34] apiserver oom_adj: -16
	I1002 21:02:48.480490 2815477 kubeadm.go:601] duration metric: took 25.929648904s to restartPrimaryControlPlane
	I1002 21:02:48.480498 2815477 kubeadm.go:402] duration metric: took 26.006606486s to StartCluster
	I1002 21:02:48.480512 2815477 settings.go:142] acquiring lock: {Name:mke92114e22bdbcff74119665eced9d6b9ac1b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:02:48.480571 2815477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:02:48.481163 2815477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/kubeconfig: {Name:mkcf76851e68b723b0046b589af4cfa7ca9a3bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:02:48.481372 2815477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 21:02:48.481615 2815477 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:02:48.481655 2815477 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:02:48.481712 2815477 addons.go:69] Setting storage-provisioner=true in profile "functional-029371"
	I1002 21:02:48.481724 2815477 addons.go:238] Setting addon storage-provisioner=true in "functional-029371"
	W1002 21:02:48.481729 2815477 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:02:48.481747 2815477 host.go:66] Checking if "functional-029371" exists ...
	I1002 21:02:48.482158 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:48.483561 2815477 addons.go:69] Setting default-storageclass=true in profile "functional-029371"
	I1002 21:02:48.483578 2815477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-029371"
	I1002 21:02:48.483892 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:48.484503 2815477 out.go:179] * Verifying Kubernetes components...
	I1002 21:02:48.488400 2815477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:02:48.512851 2815477 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:02:48.515887 2815477 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:02:48.515899 2815477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:02:48.516019 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:48.521341 2815477 addons.go:238] Setting addon default-storageclass=true in "functional-029371"
	W1002 21:02:48.521351 2815477 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:02:48.521373 2815477 host.go:66] Checking if "functional-029371" exists ...
	I1002 21:02:48.521783 2815477 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
	I1002 21:02:48.538897 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:48.567826 2815477 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:02:48.567842 2815477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:02:48.567904 2815477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
	I1002 21:02:48.600679 2815477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
	I1002 21:02:48.711735 2815477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:02:48.759797 2815477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:02:48.786435 2815477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:02:49.590968 2815477 node_ready.go:35] waiting up to 6m0s for node "functional-029371" to be "Ready" ...
	I1002 21:02:49.615606 2815477 node_ready.go:49] node "functional-029371" is "Ready"
	I1002 21:02:49.615622 2815477 node_ready.go:38] duration metric: took 24.634917ms for node "functional-029371" to be "Ready" ...
	I1002 21:02:49.615634 2815477 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:02:49.615693 2815477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:02:49.629485 2815477 api_server.go:72] duration metric: took 1.148087915s to wait for apiserver process to appear ...
	I1002 21:02:49.629499 2815477 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:02:49.629516 2815477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:02:49.637208 2815477 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:02:49.640155 2815477 addons.go:514] duration metric: took 1.158481935s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:02:49.656576 2815477 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 21:02:49.666241 2815477 api_server.go:141] control plane version: v1.34.1
	I1002 21:02:49.666257 2815477 api_server.go:131] duration metric: took 36.753865ms to wait for apiserver health ...
	I1002 21:02:49.666264 2815477 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:02:49.672996 2815477 system_pods.go:59] 8 kube-system pods found
	I1002 21:02:49.673015 2815477 system_pods.go:61] "coredns-66bc5c9577-bswh9" [effda912-e3ee-4d9f-af34-8abe9a9d3659] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:02:49.673022 2815477 system_pods.go:61] "etcd-functional-029371" [0bf73d2f-a733-44ce-b06a-2fbb6abee9d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:02:49.673027 2815477 system_pods.go:61] "kindnet-9zmhd" [2d6be820-35d6-4183-800b-2b4a0971e0bc] Running
	I1002 21:02:49.673033 2815477 system_pods.go:61] "kube-apiserver-functional-029371" [ae91cf8a-78d4-4bc8-bbd8-b08725d3faeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:02:49.673042 2815477 system_pods.go:61] "kube-controller-manager-functional-029371" [1e5748f7-147f-49f4-ba46-881bcca8f6c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:02:49.673046 2815477 system_pods.go:61] "kube-proxy-xd2gs" [9f8999eb-7efb-417d-9a06-398ee7234f0b] Running
	I1002 21:02:49.673052 2815477 system_pods.go:61] "kube-scheduler-functional-029371" [1974f0f4-e901-4694-a6eb-121fb450785f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:02:49.673054 2815477 system_pods.go:61] "storage-provisioner" [f02122ca-7ec7-49b6-a4fc-f334ffb1ff51] Running
	I1002 21:02:49.673059 2815477 system_pods.go:74] duration metric: took 6.790629ms to wait for pod list to return data ...
	I1002 21:02:49.673066 2815477 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:02:49.678854 2815477 default_sa.go:45] found service account: "default"
	I1002 21:02:49.678867 2815477 default_sa.go:55] duration metric: took 5.796577ms for default service account to be created ...
	I1002 21:02:49.678880 2815477 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:02:49.682663 2815477 system_pods.go:86] 8 kube-system pods found
	I1002 21:02:49.682693 2815477 system_pods.go:89] "coredns-66bc5c9577-bswh9" [effda912-e3ee-4d9f-af34-8abe9a9d3659] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:02:49.682701 2815477 system_pods.go:89] "etcd-functional-029371" [0bf73d2f-a733-44ce-b06a-2fbb6abee9d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:02:49.682706 2815477 system_pods.go:89] "kindnet-9zmhd" [2d6be820-35d6-4183-800b-2b4a0971e0bc] Running
	I1002 21:02:49.682712 2815477 system_pods.go:89] "kube-apiserver-functional-029371" [ae91cf8a-78d4-4bc8-bbd8-b08725d3faeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:02:49.682717 2815477 system_pods.go:89] "kube-controller-manager-functional-029371" [1e5748f7-147f-49f4-ba46-881bcca8f6c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:02:49.682720 2815477 system_pods.go:89] "kube-proxy-xd2gs" [9f8999eb-7efb-417d-9a06-398ee7234f0b] Running
	I1002 21:02:49.682725 2815477 system_pods.go:89] "kube-scheduler-functional-029371" [1974f0f4-e901-4694-a6eb-121fb450785f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:02:49.682728 2815477 system_pods.go:89] "storage-provisioner" [f02122ca-7ec7-49b6-a4fc-f334ffb1ff51] Running
	I1002 21:02:49.682734 2815477 system_pods.go:126] duration metric: took 3.84973ms to wait for k8s-apps to be running ...
	I1002 21:02:49.682740 2815477 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:02:49.682815 2815477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:02:49.702778 2815477 system_svc.go:56] duration metric: took 20.015163ms WaitForService to wait for kubelet
	I1002 21:02:49.702796 2815477 kubeadm.go:586] duration metric: took 1.221402749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:02:49.702813 2815477 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:02:49.705488 2815477 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:02:49.705503 2815477 node_conditions.go:123] node cpu capacity is 2
	I1002 21:02:49.705513 2815477 node_conditions.go:105] duration metric: took 2.695168ms to run NodePressure ...
	I1002 21:02:49.705525 2815477 start.go:241] waiting for startup goroutines ...
	I1002 21:02:49.705533 2815477 start.go:246] waiting for cluster config update ...
	I1002 21:02:49.705542 2815477 start.go:255] writing updated cluster config ...
	I1002 21:02:49.705853 2815477 ssh_runner.go:195] Run: rm -f paused
	I1002 21:02:49.709721 2815477 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:02:49.714686 2815477 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bswh9" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:02:51.720905 2815477 pod_ready.go:104] pod "coredns-66bc5c9577-bswh9" is not "Ready", error: <nil>
	W1002 21:02:54.220214 2815477 pod_ready.go:104] pod "coredns-66bc5c9577-bswh9" is not "Ready", error: <nil>
	W1002 21:02:56.721785 2815477 pod_ready.go:104] pod "coredns-66bc5c9577-bswh9" is not "Ready", error: <nil>
	I1002 21:02:58.219890 2815477 pod_ready.go:94] pod "coredns-66bc5c9577-bswh9" is "Ready"
	I1002 21:02:58.219904 2815477 pod_ready.go:86] duration metric: took 8.505203843s for pod "coredns-66bc5c9577-bswh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.222070 2815477 pod_ready.go:83] waiting for pod "etcd-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.226512 2815477 pod_ready.go:94] pod "etcd-functional-029371" is "Ready"
	I1002 21:02:58.226525 2815477 pod_ready.go:86] duration metric: took 4.443939ms for pod "etcd-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.228769 2815477 pod_ready.go:83] waiting for pod "kube-apiserver-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.233001 2815477 pod_ready.go:94] pod "kube-apiserver-functional-029371" is "Ready"
	I1002 21:02:58.233014 2815477 pod_ready.go:86] duration metric: took 4.234122ms for pod "kube-apiserver-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.235462 2815477 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.417868 2815477 pod_ready.go:94] pod "kube-controller-manager-functional-029371" is "Ready"
	I1002 21:02:58.417882 2815477 pod_ready.go:86] duration metric: took 182.40712ms for pod "kube-controller-manager-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:58.617898 2815477 pod_ready.go:83] waiting for pod "kube-proxy-xd2gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:59.018324 2815477 pod_ready.go:94] pod "kube-proxy-xd2gs" is "Ready"
	I1002 21:02:59.018348 2815477 pod_ready.go:86] duration metric: took 400.427253ms for pod "kube-proxy-xd2gs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:02:59.218528 2815477 pod_ready.go:83] waiting for pod "kube-scheduler-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:03:00.419083 2815477 pod_ready.go:94] pod "kube-scheduler-functional-029371" is "Ready"
	I1002 21:03:00.419100 2815477 pod_ready.go:86] duration metric: took 1.200558587s for pod "kube-scheduler-functional-029371" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:03:00.419111 2815477 pod_ready.go:40] duration metric: took 10.709369394s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:03:00.480084 2815477 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:03:00.483562 2815477 out.go:179] * Done! kubectl is now configured to use "functional-029371" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f7ec92ef7ee86       35f3cbee4fb77       4 minutes ago       Running             nginx                     0                   d2770ddcd54ff       nginx-svc                                   default
	e9301c91add10       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       2                   d016164eeb92f       storage-provisioner                         kube-system
	4a78f66b8de9a       7eb2c6ff0c5a7       4 minutes ago       Running             kube-controller-manager   2                   f8f65514862b2       kube-controller-manager-functional-029371   kube-system
	ec86407873fe8       43911e833d64d       4 minutes ago       Running             kube-apiserver            0                   4bff9fa30870b       kube-apiserver-functional-029371            kube-system
	0dd8df4eab17a       b5f57ec6b9867       4 minutes ago       Running             kube-scheduler            1                   7bbfe7c234b3a       kube-scheduler-functional-029371            kube-system
	ff6176ec7ae2d       a1894772a478e       4 minutes ago       Running             etcd                      1                   f385ef2d71fec       etcd-functional-029371                      kube-system
	9363aff35a4ac       7eb2c6ff0c5a7       4 minutes ago       Exited              kube-controller-manager   1                   f8f65514862b2       kube-controller-manager-functional-029371   kube-system
	bb62981a90b2e       05baa95f5142d       4 minutes ago       Running             kube-proxy                1                   095dc989df9d3       kube-proxy-xd2gs                            kube-system
	9f4fa4e6cafcd       ba04bb24b9575       4 minutes ago       Exited              storage-provisioner       1                   d016164eeb92f       storage-provisioner                         kube-system
	e13a9218fb36c       138784d87c9c5       4 minutes ago       Running             coredns                   1                   28a525d91513d       coredns-66bc5c9577-bswh9                    kube-system
	c0544bb436a09       b1a8c6f707935       4 minutes ago       Running             kindnet-cni               1                   ebe0641167404       kindnet-9zmhd                               kube-system
	6e626b9db7e71       138784d87c9c5       5 minutes ago       Exited              coredns                   0                   28a525d91513d       coredns-66bc5c9577-bswh9                    kube-system
	fa91f8ea7d10f       b1a8c6f707935       5 minutes ago       Exited              kindnet-cni               0                   ebe0641167404       kindnet-9zmhd                               kube-system
	71353644d4012       05baa95f5142d       5 minutes ago       Exited              kube-proxy                0                   095dc989df9d3       kube-proxy-xd2gs                            kube-system
	97c3f3f108740       a1894772a478e       6 minutes ago       Exited              etcd                      0                   f385ef2d71fec       etcd-functional-029371                      kube-system
	37a0176519c77       b5f57ec6b9867       6 minutes ago       Exited              kube-scheduler            0                   7bbfe7c234b3a       kube-scheduler-functional-029371            kube-system
	
	
	==> containerd <==
	Oct 02 21:03:56 functional-029371 containerd[3583]: time="2025-10-02T21:03:56.670560889Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 21:03:56 functional-029371 containerd[3583]: time="2025-10-02T21:03:56.673171295Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:03:56 functional-029371 containerd[3583]: time="2025-10-02T21:03:56.801543806Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:03:57 functional-029371 containerd[3583]: time="2025-10-02T21:03:57.085752582Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:03:57 functional-029371 containerd[3583]: time="2025-10-02T21:03:57.086716299Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 02 21:04:42 functional-029371 containerd[3583]: time="2025-10-02T21:04:42.672458224Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 02 21:04:42 functional-029371 containerd[3583]: time="2025-10-02T21:04:42.675467202Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:04:42 functional-029371 containerd[3583]: time="2025-10-02T21:04:42.805767171Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:04:43 functional-029371 containerd[3583]: time="2025-10-02T21:04:43.073707212Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:04:43 functional-029371 containerd[3583]: time="2025-10-02T21:04:43.073835321Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 02 21:04:47 functional-029371 containerd[3583]: time="2025-10-02T21:04:47.670437765Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 21:04:47 functional-029371 containerd[3583]: time="2025-10-02T21:04:47.672834359Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:04:47 functional-029371 containerd[3583]: time="2025-10-02T21:04:47.800076627Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:04:48 functional-029371 containerd[3583]: time="2025-10-02T21:04:48.093535524Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:04:48 functional-029371 containerd[3583]: time="2025-10-02T21:04:48.093640035Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 02 21:06:07 functional-029371 containerd[3583]: time="2025-10-02T21:06:07.670153082Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 02 21:06:07 functional-029371 containerd[3583]: time="2025-10-02T21:06:07.672583301Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:06:07 functional-029371 containerd[3583]: time="2025-10-02T21:06:07.794821274Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:06:08 functional-029371 containerd[3583]: time="2025-10-02T21:06:08.182382037Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:06:08 functional-029371 containerd[3583]: time="2025-10-02T21:06:08.182484693Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21215"
	Oct 02 21:06:10 functional-029371 containerd[3583]: time="2025-10-02T21:06:10.670600679Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 21:06:10 functional-029371 containerd[3583]: time="2025-10-02T21:06:10.673059452Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:06:10 functional-029371 containerd[3583]: time="2025-10-02T21:06:10.823429781Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 21:06:11 functional-029371 containerd[3583]: time="2025-10-02T21:06:11.133125047Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 21:06:11 functional-029371 containerd[3583]: time="2025-10-02T21:06:11.133170176Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	
	
	==> coredns [6e626b9db7e71cca13b7f0fa58c29712669c45287dde64cab1606f74ddd60435] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36470 - 16817 "HINFO IN 8350429670381813791.6003931427677546625. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021892735s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e13a9218fb36c96f900452fa4804b05d1af634f65dabde0e99e4745bf3bdd984] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51972 - 42555 "HINFO IN 2726958689615771147.4044054909872593520. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046976422s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-029371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-029371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=functional-029371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_01_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:01:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-029371
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:07:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:03:48 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:03:48 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:03:48 +0000   Thu, 02 Oct 2025 21:01:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:03:48 +0000   Thu, 02 Oct 2025 21:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-029371
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd50d735b20e43169e671ed5ecbfe749
	  System UUID:                482999fa-369e-4d58-bd97-98172b118eff
	  Boot ID:                    ddea27b5-1bb4-4ff4-b6ce-678e2308ca3c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-hf52j          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 coredns-66bc5c9577-bswh9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m59s
	  kube-system                 etcd-functional-029371                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m4s
	  kube-system                 kindnet-9zmhd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m59s
	  kube-system                 kube-apiserver-functional-029371             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-functional-029371    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-xd2gs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-scheduler-functional-029371             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m58s                  kube-proxy       
	  Normal   Starting                 4m29s                  kube-proxy       
	  Warning  CgroupV1                 6m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m12s (x8 over 6m12s)  kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m12s (x8 over 6m12s)  kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m12s (x7 over 6m12s)  kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m5s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m5s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m4s                   kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m4s                   kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m4s                   kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m                     node-controller  Node functional-029371 event: Registered Node functional-029371 in Controller
	  Normal   NodeReady                5m17s                  kubelet          Node functional-029371 status is now: NodeReady
	  Normal   Starting                 4m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node functional-029371 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node functional-029371 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node functional-029371 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m28s                  node-controller  Node functional-029371 event: Registered Node functional-029371 in Controller
	
	
	==> dmesg <==
	[Oct 2 20:00] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 2 20:51] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [97c3f3f10874020e0999f5e88cbf6e33bbdc919ddc54715ffb8f68285cfb4890] <==
	{"level":"warn","ts":"2025-10-02T21:01:09.091121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.113696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.140304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.163581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.214029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:01:09.290338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:02:00.115325Z","caller":"traceutil/trace.go:172","msg":"trace[291594329] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"103.807712ms","start":"2025-10-02T21:02:00.011494Z","end":"2025-10-02T21:02:00.115302Z","steps":["trace[291594329] 'process raft request'  (duration: 103.662216ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T21:02:38.044223Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T21:02:38.044273Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-029371","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T21:02:38.044393Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:02:38.045901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:02:38.047455Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.047522Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T21:02:38.047636Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T21:02:38.047654Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T21:02:38.047956Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048012Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:02:38.048024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048106Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:02:38.048130Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:02:38.048140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.050950Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T21:02:38.051086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:02:38.051113Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T21:02:38.051121Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-029371","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ff6176ec7ae2de2fdb8b2e8cbe1b6888a2b29bb1783765d18ed72f5fa5850090] <==
	{"level":"warn","ts":"2025-10-02T21:02:45.028951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.057158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.090982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.102413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.129886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.140808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.165726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.180102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.198523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.236131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.251965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.277163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.288844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.309935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.328810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.347740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.364690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.382356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.412787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.432090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.448087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.478001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.495710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.508017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:02:45.564423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38004","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:07:18 up 16:49,  0 user,  load average: 0.12, 0.84, 2.56
	Linux functional-029371 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0544bb436a0906cfd062760bdbcd21a2d29e77e585ae36ebb930aa43c485e98] <==
	I1002 21:05:08.711981       1 main.go:301] handling current node
	I1002 21:05:18.711449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:05:18.711676       1 main.go:301] handling current node
	I1002 21:05:28.720234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:05:28.720423       1 main.go:301] handling current node
	I1002 21:05:38.718288       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:05:38.718334       1 main.go:301] handling current node
	I1002 21:05:48.712010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:05:48.712138       1 main.go:301] handling current node
	I1002 21:05:58.716117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:05:58.716155       1 main.go:301] handling current node
	I1002 21:06:08.720086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:06:08.720173       1 main.go:301] handling current node
	I1002 21:06:18.711328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:06:18.711363       1 main.go:301] handling current node
	I1002 21:06:28.711919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:06:28.711958       1 main.go:301] handling current node
	I1002 21:06:38.711798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:06:38.711835       1 main.go:301] handling current node
	I1002 21:06:48.712055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:06:48.712091       1 main.go:301] handling current node
	I1002 21:06:58.713628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:06:58.713665       1 main.go:301] handling current node
	I1002 21:07:08.713007       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:07:08.713042       1 main.go:301] handling current node
	
	
	==> kindnet [fa91f8ea7d10ffa773ed1c591ed0215f42b98b1c763fd23db5df45e664688342] <==
	I1002 21:01:19.695785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:01:19.696051       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 21:01:19.696173       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:01:19.696193       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:01:19.696203       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:01:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:01:19.891576       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:01:19.891800       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:01:19.891903       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:01:19.892752       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:01:49.891919       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:01:49.892931       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:01:49.892947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:01:49.893282       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:01:51.493072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:01:51.493164       1 metrics.go:72] Registering metrics
	I1002 21:01:51.493395       1 controller.go:711] "Syncing nftables rules"
	I1002 21:01:59.897194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:01:59.897288       1 main.go:301] handling current node
	I1002 21:02:09.897832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:02:09.897866       1 main.go:301] handling current node
	I1002 21:02:19.895132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:02:19.895159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ec86407873fe8df85e4887b5c5b2b21b30f5b2fe009c3928a9a2d4b98c874b5a] <==
	I1002 21:02:46.392200       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 21:02:46.392769       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:02:46.392841       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:02:46.394347       1 aggregator.go:171] initial CRD sync complete...
	I1002 21:02:46.394462       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 21:02:46.394733       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:02:46.394833       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:02:46.394932       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:02:46.408635       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:02:46.408912       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:02:46.410234       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:02:46.418967       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:02:46.721100       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:02:47.095033       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1002 21:02:47.431828       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 21:02:47.433331       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:02:47.439034       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:02:48.174490       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:02:48.324212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:02:48.433235       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:02:48.444842       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:02:50.077548       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:03:03.853124       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.168.209"}
	I1002 21:03:10.555108       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.142.42"}
	I1002 21:03:19.105685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.24.217"}
	
	
	==> kube-controller-manager [4a78f66b8de9abe5c9ae735c1c02e72e3256c9e5545188d321dac91ce1606b57] <==
	I1002 21:02:49.700250       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:02:49.700565       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:02:49.701186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:02:49.703927       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:02:49.707401       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:02:49.707597       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:02:49.710897       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:02:49.714320       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:02:49.714957       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:02:49.717140       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:02:49.719238       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:02:49.719500       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:02:49.719678       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:02:49.719541       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:02:49.720464       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:02:49.719445       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:02:49.719430       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:02:49.726201       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:02:49.731606       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:02:49.731872       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:02:49.742298       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:02:49.756078       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:02:49.756275       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:02:49.760350       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:02:49.763444       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-controller-manager [9363aff35a4acb1420657199acac0ca01f30c32a92243e6ea96ec31d175aae16] <==
	I1002 21:02:30.185628       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:02:31.435116       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 21:02:31.435148       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:31.436649       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 21:02:31.436969       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 21:02:31.437036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 21:02:31.437053       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 21:02:41.438751       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [71353644d4012c4d79872e308445fb70b121b226b85b2a01cfa5589208cf6cd7] <==
	I1002 21:01:19.631691       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:01:19.774390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:01:19.875501       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:01:19.875540       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:01:19.876304       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:01:19.929962       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:01:19.930014       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:01:19.933989       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:01:19.934490       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:01:19.934650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:01:19.938894       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:01:19.939089       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:01:19.939124       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:01:19.939238       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:01:19.939971       1 config.go:200] "Starting service config controller"
	I1002 21:01:19.940126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:01:19.940238       1 config.go:309] "Starting node config controller"
	I1002 21:01:19.940333       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:01:20.043367       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:01:20.043410       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:01:20.043424       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:01:20.048713       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [bb62981a90b2e6919f84a4d9b34bbfb6dbeaf7ea0fca18ddd27c59c4cc7382b7] <==
	I1002 21:02:28.761611       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1002 21:02:28.762697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:30.012349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:31.803349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:02:36.781864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-029371&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1002 21:02:48.363238       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:02:48.365233       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:02:48.365530       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:02:48.400401       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:02:48.400613       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:02:48.415578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:02:48.416007       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:02:48.416157       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:48.418783       1 config.go:200] "Starting service config controller"
	I1002 21:02:48.418810       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:02:48.419572       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:02:48.419695       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:02:48.419816       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:02:48.419937       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:02:48.420888       1 config.go:309] "Starting node config controller"
	I1002 21:02:48.421046       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:02:48.421155       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:02:48.436399       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:02:48.523114       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:02:48.592161       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0dd8df4eab17a4a504ba75dcd53063299a3901716a3ee868366c80c5f68c65a9] <==
	I1002 21:02:43.746760       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:02:46.263780       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:02:46.263822       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:02:46.263834       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:02:46.264102       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:02:46.381416       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:02:46.381449       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:02:46.389679       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:46.390180       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:46.393786       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:02:46.394631       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:02:46.490354       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [37a0176519c77084790d182a341b7648e186e2e1a614314dea11c7e9d8b9dcda] <==
	E1002 21:01:10.505154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 21:01:10.505442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:01:10.505584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:01:10.511789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:01:10.512019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:01:10.512128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:01:10.512226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:01:10.512321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:01:10.512551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 21:01:10.516026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:01:10.516183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:01:11.341487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:01:11.368611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:01:11.429159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:01:11.434896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:01:11.488406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:01:11.577725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:01:11.588312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 21:01:13.557394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:38.107216       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 21:02:38.107251       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 21:02:38.107270       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 21:02:38.107389       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:02:38.107422       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 21:02:38.107482       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 21:05:16 functional-029371 kubelet[4514]: E1002 21:05:16.669733    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:05:25 functional-029371 kubelet[4514]: E1002 21:05:25.670284    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:05:30 functional-029371 kubelet[4514]: E1002 21:05:30.670075    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:05:40 functional-029371 kubelet[4514]: E1002 21:05:40.669974    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:05:44 functional-029371 kubelet[4514]: E1002 21:05:44.670333    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:05:53 functional-029371 kubelet[4514]: E1002 21:05:53.670353    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:05:55 functional-029371 kubelet[4514]: E1002 21:05:55.670255    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:06:08 functional-029371 kubelet[4514]: E1002 21:06:08.182811    4514 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 21:06:08 functional-029371 kubelet[4514]: E1002 21:06:08.182867    4514 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 21:06:08 functional-029371 kubelet[4514]: E1002 21:06:08.182938    4514 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(d97caa8e-1329-4661-b54c-ddad7ae3095f): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 21:06:08 functional-029371 kubelet[4514]: E1002 21:06:08.182977    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:06:11 functional-029371 kubelet[4514]: E1002 21:06:11.133465    4514 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 21:06:11 functional-029371 kubelet[4514]: E1002 21:06:11.133543    4514 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 21:06:11 functional-029371 kubelet[4514]: E1002 21:06:11.133622    4514 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-hf52j_default(3f468c29-a57d-4a49-b576-7dfbb2cf1868): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 21:06:11 functional-029371 kubelet[4514]: E1002 21:06:11.133661    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:06:22 functional-029371 kubelet[4514]: E1002 21:06:22.670350    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:06:25 functional-029371 kubelet[4514]: E1002 21:06:25.669726    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:06:34 functional-029371 kubelet[4514]: E1002 21:06:34.669343    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:06:39 functional-029371 kubelet[4514]: E1002 21:06:39.670182    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:06:49 functional-029371 kubelet[4514]: E1002 21:06:49.669582    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:06:52 functional-029371 kubelet[4514]: E1002 21:06:52.670050    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:07:02 functional-029371 kubelet[4514]: E1002 21:07:02.673078    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	Oct 02 21:07:03 functional-029371 kubelet[4514]: E1002 21:07:03.670572    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:07:16 functional-029371 kubelet[4514]: E1002 21:07:16.671162    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-hf52j" podUID="3f468c29-a57d-4a49-b576-7dfbb2cf1868"
	Oct 02 21:07:17 functional-029371 kubelet[4514]: E1002 21:07:17.670222    4514 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d97caa8e-1329-4661-b54c-ddad7ae3095f"
	
	
	==> storage-provisioner [9f4fa4e6cafcdf15d3a652b129916916db3a35a6bba6315257415306d82081ac] <==
	I1002 21:02:28.534730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:02:28.536501       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e9301c91add10f7b8320a98341322365ab0397a2b58eb545f437ffcdcab5d2df] <==
	W1002 21:06:53.563538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:06:55.567244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:06:55.574361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:06:57.577683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:06:57.582252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:06:59.586034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:06:59.593034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:01.595611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:01.600291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:03.603199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:03.607861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:05.610358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:05.615551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:07.618946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:07.623616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:09.627389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:09.633949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:11.636419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:11.641852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:13.644525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:13.649404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:15.652248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:15.656595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:17.660013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:07:17.664985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
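The kubelet events and describe output above all fail with HTTP 429 from registry-1.docker.io, i.e. Docker Hub's unauthenticated pull rate limit, which is why hello-node-connect-7d85dfc575-hf52j and sp-pod never leave ImagePullBackOff. A minimal mitigation sketch (assuming the functional-029371 profile from the logs and a local Docker daemon; these commands are illustrative and were not part of the recorded run):

	# Pull the images locally (using local Docker credentials/cache), then load
	# them into the minikube node so the kubelet never pulls anonymously from docker.io.
	docker pull kicbase/echo-server:latest
	docker pull nginx:latest
	minikube -p functional-029371 image load kicbase/echo-server:latest
	minikube -p functional-029371 image load nginx:latest

With the images pre-loaded into the node's containerd store the pods can start without an anonymous pull against docker.io; authenticating the pulls via an imagePullSecrets entry on the affected pods would avoid the rate limit as well.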
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-029371 -n functional-029371
helpers_test.go:269: (dbg) Run:  kubectl --context functional-029371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-connect-7d85dfc575-hf52j sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-029371 describe pod hello-node-connect-7d85dfc575-hf52j sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-029371 describe pod hello-node-connect-7d85dfc575-hf52j sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-connect-7d85dfc575-hf52j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:03:19 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwrnm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwrnm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hf52j to functional-029371
	  Normal   Pulling    69s (x5 over 4m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     68s (x5 over 4m)     kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     68s (x5 over 4m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x15 over 3m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-029371/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:03:16 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9whq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-f9whq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-029371
	  Warning  Failed     2m36s (x4 over 4m2s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    72s (x5 over 4m3s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     71s (x5 over 4m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     71s                   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x15 over 4m2s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2s (x15 over 4m2s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (248.53s)
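Note: both non-running pods above fail for the same underlying reason: unauthenticated image pulls from Docker Hub (registry-1.docker.io) are rejected with 429 Too Many Requests, so the containers never start and the PVC itself is never exercised. A hedged mitigation sketch, using only standard docker/minikube commands and the image names taken from the events above (it assumes the images can be pulled on the host, e.g. after docker login, and is not part of the test run):

	# Pre-load the images into the cluster so the kubelet does not pull from Docker Hub.
	docker pull kicbase/echo-server:latest
	docker pull nginx:latest
	minikube -p functional-029371 image load kicbase/echo-server:latest
	minikube -p functional-029371 image load nginx:latest
	# Alternatively, authenticate on the host: docker login   # authenticated pulls have a higher rate limit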

                                                
                                    

Test pass (298/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 36.64
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 30.79
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 159.07
29 TestAddons/serial/Volcano 41.76
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 8.9
35 TestAddons/parallel/Registry 16.11
36 TestAddons/parallel/RegistryCreds 0.75
37 TestAddons/parallel/Ingress 18.83
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 5.84
41 TestAddons/parallel/CSI 49.63
42 TestAddons/parallel/Headlamp 12.3
43 TestAddons/parallel/CloudSpanner 6.58
44 TestAddons/parallel/LocalPath 51.38
45 TestAddons/parallel/NvidiaDevicePlugin 6.59
46 TestAddons/parallel/Yakd 11.97
48 TestAddons/StoppedEnableDisable 12.33
49 TestCertOptions 36.97
50 TestCertExpiration 235.3
52 TestForceSystemdFlag 46.6
53 TestForceSystemdEnv 50.11
59 TestErrorSpam/setup 33.17
60 TestErrorSpam/start 0.84
61 TestErrorSpam/status 1.13
62 TestErrorSpam/pause 1.73
63 TestErrorSpam/unpause 1.9
64 TestErrorSpam/stop 12.17
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 76.43
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 7.45
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
76 TestFunctional/serial/CacheCmd/cache/add_local 1.26
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.17
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 42.22
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.5
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 4.58
90 TestFunctional/parallel/ConfigCmd 0.52
92 TestFunctional/parallel/DryRun 0.62
93 TestFunctional/parallel/InternationalLanguage 0.26
94 TestFunctional/parallel/StatusCmd 1.6
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.75
103 TestFunctional/parallel/CpCmd 2.46
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.65
110 TestFunctional/parallel/NodeLabels 0.14
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.79
114 TestFunctional/parallel/License 0.34
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 350.2
127 TestFunctional/parallel/ServiceCmd/List 0.51
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
130 TestFunctional/parallel/ServiceCmd/Format 0.39
131 TestFunctional/parallel/ServiceCmd/URL 0.39
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
133 TestFunctional/parallel/ProfileCmd/profile_list 0.42
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
135 TestFunctional/parallel/MountCmd/any-port 8.34
136 TestFunctional/parallel/MountCmd/specific-port 2.3
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.37
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 1.16
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.55
145 TestFunctional/parallel/ImageCommands/Setup 0.67
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 216.68
164 TestMultiControlPlane/serial/DeployApp 50.15
165 TestMultiControlPlane/serial/PingHostFromPods 1.76
166 TestMultiControlPlane/serial/AddWorkerNode 60.95
167 TestMultiControlPlane/serial/NodeLabels 0.19
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
169 TestMultiControlPlane/serial/CopyFile 19.61
170 TestMultiControlPlane/serial/StopSecondaryNode 12.8
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
172 TestMultiControlPlane/serial/RestartSecondaryNode 14.17
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.53
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.54
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.85
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
177 TestMultiControlPlane/serial/StopCluster 35.95
178 TestMultiControlPlane/serial/RestartCluster 61.72
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
180 TestMultiControlPlane/serial/AddSecondaryNode 82.43
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
185 TestJSONOutput/start/Command 50.71
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.77
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.66
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 52.48
211 TestKicCustomNetwork/use_default_bridge_network 37.46
212 TestKicExistingNetwork 35.84
213 TestKicCustomSubnet 38.99
214 TestKicStaticIP 37.15
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.2
219 TestMountStart/serial/StartWithMountFirst 9.33
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.84
222 TestMountStart/serial/VerifyMountSecond 0.38
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.22
226 TestMountStart/serial/RestartStopped 7.61
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 109.25
231 TestMultiNode/serial/DeployApp2Nodes 5.62
232 TestMultiNode/serial/PingHostFrom2Pods 0.99
233 TestMultiNode/serial/AddNode 29.01
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.32
237 TestMultiNode/serial/StopNode 2.26
238 TestMultiNode/serial/StartAfterStop 7.55
239 TestMultiNode/serial/RestartKeepsNodes 79.52
240 TestMultiNode/serial/DeleteNode 5.6
241 TestMultiNode/serial/StopMultiNode 23.85
242 TestMultiNode/serial/RestartMultiNode 52.49
243 TestMultiNode/serial/ValidateNameConflict 37.37
248 TestPreload 157.92
250 TestScheduledStopUnix 111.48
253 TestInsufficientStorage 11.24
254 TestRunningBinaryUpgrade 61.75
256 TestKubernetesUpgrade 362.86
257 TestMissingContainerUpgrade 152.59
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 40.55
261 TestNoKubernetes/serial/StartWithStopK8s 26.14
262 TestNoKubernetes/serial/Start 8.79
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 0.68
265 TestNoKubernetes/serial/Stop 1.22
266 TestNoKubernetes/serial/StartNoArgs 6.96
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
268 TestStoppedBinaryUpgrade/Setup 1.53
269 TestStoppedBinaryUpgrade/Upgrade 67.99
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.84
279 TestPause/serial/Start 83.25
280 TestPause/serial/SecondStartNoReconfiguration 7.17
281 TestPause/serial/Pause 0.7
282 TestPause/serial/VerifyStatus 0.32
283 TestPause/serial/Unpause 0.85
284 TestPause/serial/PauseAgain 0.95
285 TestPause/serial/DeletePaused 2.75
286 TestPause/serial/VerifyDeletedResources 0.5
294 TestNetworkPlugins/group/false 5.43
299 TestStartStop/group/old-k8s-version/serial/FirstStart 63.73
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.4
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
302 TestStartStop/group/old-k8s-version/serial/Stop 11.97
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/old-k8s-version/serial/SecondStart 55.54
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
308 TestStartStop/group/no-preload/serial/FirstStart 75.68
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
310 TestStartStop/group/old-k8s-version/serial/Pause 3.65
312 TestStartStop/group/embed-certs/serial/FirstStart 91.38
313 TestStartStop/group/no-preload/serial/DeployApp 8.39
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
315 TestStartStop/group/no-preload/serial/Stop 12.02
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 53.27
318 TestStartStop/group/embed-certs/serial/DeployApp 9.51
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.86
320 TestStartStop/group/embed-certs/serial/Stop 12.84
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
322 TestStartStop/group/embed-certs/serial/SecondStart 49.16
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
326 TestStartStop/group/no-preload/serial/Pause 3.18
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.11
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/embed-certs/serial/Pause 3.92
334 TestStartStop/group/newest-cni/serial/FirstStart 42.7
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
337 TestStartStop/group/newest-cni/serial/Stop 1.27
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 18.53
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.48
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
344 TestStartStop/group/newest-cni/serial/Pause 3.24
345 TestNetworkPlugins/group/auto/Start 86.75
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.71
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.66
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.66
354 TestNetworkPlugins/group/auto/KubeletFlags 0.63
355 TestNetworkPlugins/group/auto/NetCatPod 11.52
356 TestNetworkPlugins/group/kindnet/Start 59.73
357 TestNetworkPlugins/group/auto/DNS 0.22
358 TestNetworkPlugins/group/auto/Localhost 0.16
359 TestNetworkPlugins/group/auto/HairPin 0.15
360 TestNetworkPlugins/group/calico/Start 56.32
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
363 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
364 TestNetworkPlugins/group/kindnet/DNS 0.24
365 TestNetworkPlugins/group/kindnet/Localhost 0.2
366 TestNetworkPlugins/group/kindnet/HairPin 0.27
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.4
369 TestNetworkPlugins/group/calico/NetCatPod 12.43
370 TestNetworkPlugins/group/custom-flannel/Start 70.88
371 TestNetworkPlugins/group/calico/DNS 0.26
372 TestNetworkPlugins/group/calico/Localhost 0.3
373 TestNetworkPlugins/group/calico/HairPin 0.33
374 TestNetworkPlugins/group/enable-default-cni/Start 72.7
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.36
377 TestNetworkPlugins/group/custom-flannel/DNS 0.21
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
380 TestNetworkPlugins/group/flannel/Start 68.05
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.34
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
386 TestNetworkPlugins/group/bridge/Start 51.13
387 TestNetworkPlugins/group/flannel/ControllerPod 6
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
389 TestNetworkPlugins/group/flannel/NetCatPod 10.35
390 TestNetworkPlugins/group/flannel/DNS 0.18
391 TestNetworkPlugins/group/flannel/Localhost 0.14
392 TestNetworkPlugins/group/flannel/HairPin 0.16
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
394 TestNetworkPlugins/group/bridge/NetCatPod 10.36
395 TestNetworkPlugins/group/bridge/DNS 0.24
396 TestNetworkPlugins/group/bridge/Localhost 0.21
397 TestNetworkPlugins/group/bridge/HairPin 0.29
x
+
TestDownloadOnly/v1.28.0/json-events (36.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-547491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-547491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (36.642176288s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (36.64s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 20:52:51.914163 2785630 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1002 20:52:51.914247 2785630 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-547491
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-547491: exit status 85 (86.573659ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-547491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-547491 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:52:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:52:15.316992 2785635 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:15.317124 2785635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:15.317136 2785635 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:15.317141 2785635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:15.317424 2785635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	W1002 20:52:15.317564 2785635 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21682-2783765/.minikube/config/config.json: open /home/jenkins/minikube-integration/21682-2783765/.minikube/config/config.json: no such file or directory
	I1002 20:52:15.317949 2785635 out.go:368] Setting JSON to true
	I1002 20:52:15.318788 2785635 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":59685,"bootTime":1759378651,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 20:52:15.318857 2785635 start.go:140] virtualization:  
	I1002 20:52:15.322765 2785635 out.go:99] [download-only-547491] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 20:52:15.322913 2785635 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 20:52:15.323036 2785635 notify.go:220] Checking for updates...
	I1002 20:52:15.326036 2785635 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:52:15.329102 2785635 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:15.332109 2785635 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 20:52:15.335050 2785635 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 20:52:15.337914 2785635 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 20:52:15.343438 2785635 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:52:15.343741 2785635 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:52:15.377074 2785635 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:52:15.377225 2785635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:15.435260 2785635 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 20:52:15.425964339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:52:15.435457 2785635 docker.go:318] overlay module found
	I1002 20:52:15.438459 2785635 out.go:99] Using the docker driver based on user configuration
	I1002 20:52:15.438500 2785635 start.go:304] selected driver: docker
	I1002 20:52:15.438513 2785635 start.go:924] validating driver "docker" against <nil>
	I1002 20:52:15.438622 2785635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:15.491448 2785635 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 20:52:15.48245704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:52:15.491601 2785635 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:52:15.491899 2785635 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 20:52:15.492058 2785635 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:52:15.495051 2785635 out.go:171] Using Docker driver with root privileges
	I1002 20:52:15.498052 2785635 cni.go:84] Creating CNI manager for ""
	I1002 20:52:15.498123 2785635 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 20:52:15.498138 2785635 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:52:15.498225 2785635 start.go:348] cluster config:
	{Name:download-only-547491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-547491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:15.501211 2785635 out.go:99] Starting "download-only-547491" primary control-plane node in "download-only-547491" cluster
	I1002 20:52:15.501233 2785635 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 20:52:15.504036 2785635 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:15.504064 2785635 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1002 20:52:15.504168 2785635 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:15.520038 2785635 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:52:15.521009 2785635 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:52:15.521123 2785635 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:52:15.565593 2785635 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1002 20:52:15.565622 2785635 cache.go:58] Caching tarball of preloaded images
	I1002 20:52:15.566432 2785635 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1002 20:52:15.569800 2785635 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 20:52:15.569827 2785635 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1002 20:52:15.655151 2785635 preload.go:290] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1002 20:52:15.655305 2785635 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1002 20:52:21.002538 2785635 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	
	
	* The control-plane node download-only-547491 host does not exist
	  To start a cluster, run: "minikube start -p download-only-547491"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
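Note: the download step in the log above fetches the v1.28.0 preload tarball and verifies it against an md5 checksum obtained from the GCS API (38d7f581f2fa4226c8af2c9106b982b7). A hedged manual equivalent of that integrity check, reusing the URL and checksum shown in the log rather than any new values:

	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	echo "38d7f581f2fa4226c8af2c9106b982b7  preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -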

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-547491
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (30.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-953319 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-953319 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (30.788211599s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (30.79s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 20:53:23.148503 2785630 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1002 20:53:23.148543 2785630 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-953319
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-953319: exit status 85 (91.491061ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-547491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-547491 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │ 02 Oct 25 20:52 UTC │
	│ delete  │ -p download-only-547491                                                                                                                                                               │ download-only-547491 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │ 02 Oct 25 20:52 UTC │
	│ start   │ -o=json --download-only -p download-only-953319 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-953319 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:52:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:52:52.405118 2785836 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:52.405307 2785836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:52.405315 2785836 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:52.405320 2785836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:52.405585 2785836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 20:52:52.406012 2785836 out.go:368] Setting JSON to true
	I1002 20:52:52.406953 2785836 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":59722,"bootTime":1759378651,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 20:52:52.407027 2785836 start.go:140] virtualization:  
	I1002 20:52:52.410463 2785836 out.go:99] [download-only-953319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:52:52.410699 2785836 notify.go:220] Checking for updates...
	I1002 20:52:52.413600 2785836 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:52:52.416568 2785836 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:52.419643 2785836 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 20:52:52.422541 2785836 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 20:52:52.425544 2785836 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 20:52:52.431366 2785836 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:52:52.431719 2785836 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:52:52.465647 2785836 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:52:52.465779 2785836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:52.521132 2785836 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:52:52.511869686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:52:52.521244 2785836 docker.go:318] overlay module found
	I1002 20:52:52.524322 2785836 out.go:99] Using the docker driver based on user configuration
	I1002 20:52:52.524368 2785836 start.go:304] selected driver: docker
	I1002 20:52:52.524374 2785836 start.go:924] validating driver "docker" against <nil>
	I1002 20:52:52.524478 2785836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:52.580921 2785836 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:52:52.57178452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:52:52.581080 2785836 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:52:52.581380 2785836 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 20:52:52.581544 2785836 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:52:52.584669 2785836 out.go:171] Using Docker driver with root privileges
	I1002 20:52:52.587459 2785836 cni.go:84] Creating CNI manager for ""
	I1002 20:52:52.587537 2785836 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 20:52:52.587552 2785836 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:52:52.587633 2785836 start.go:348] cluster config:
	{Name:download-only-953319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-953319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:52.590488 2785836 out.go:99] Starting "download-only-953319" primary control-plane node in "download-only-953319" cluster
	I1002 20:52:52.590510 2785836 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 20:52:52.593314 2785836 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:52.593357 2785836 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 20:52:52.593528 2785836 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:52.608888 2785836 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:52:52.609015 2785836 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:52:52.609039 2785836 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:52:52.609044 2785836 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:52:52.609052 2785836 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:52:52.651458 2785836 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 20:52:52.651488 2785836 cache.go:58] Caching tarball of preloaded images
	I1002 20:52:52.651656 2785836 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 20:52:52.654868 2785836 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 20:52:52.654900 2785836 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1002 20:52:52.742745 2785836 preload.go:290] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1002 20:52:52.742823 2785836 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 20:53:22.257738 2785836 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 20:53:22.258204 2785836 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/download-only-953319/config.json ...
	I1002 20:53:22.258241 2785836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/download-only-953319/config.json: {Name:mk668904a22a186262db434384da0a6b174298d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:53:22.258506 2785836 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 20:53:22.258719 2785836 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21682-2783765/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-953319 host does not exist
	  To start a cluster, run: "minikube start -p download-only-953319"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-953319
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1002 20:53:24.327541 2785630 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-069815 --alsologtostderr --binary-mirror http://127.0.0.1:36151 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-069815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-069815
--- PASS: TestBinaryMirror (0.59s)
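For reference, the binary-mirror flow above can be reproduced by hand. A minimal sketch, assuming a local HTTP server is already serving the Kubernetes binaries (the profile name and mirror address below are placeholders, not values from this run):

    # Download components through a local mirror instead of dl.k8s.io
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:8080 \
      --driver=docker --container-runtime=containerd
    minikube delete -p binary-mirror-demo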

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-774992
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-774992: exit status 85 (76.223572ms)

-- stdout --
	* Profile "addons-774992" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-774992"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-774992
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-774992: exit status 85 (71.33014ms)

-- stdout --
	* Profile "addons-774992" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-774992"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (159.07s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-774992 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-774992 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m39.064303555s)
--- PASS: TestAddons/Setup (159.07s)

TestAddons/serial/Volcano (41.76s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 74.943255ms
addons_test.go:876: volcano-admission stabilized in 75.905216ms
addons_test.go:868: volcano-scheduler stabilized in 76.148618ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-pz6cj" [99f2e448-6c98-4445-b76e-ff5bba1449ba] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004172043s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-96hn2" [a9494468-9bd5-40a2-b38b-2abd4a4a6021] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004105305s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-c9pcw" [ebf90ac1-8d4d-4c12-b7de-69699b43b862] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003107282s
addons_test.go:903: (dbg) Run:  kubectl --context addons-774992 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-774992 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-774992 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [11a681f3-f216-4754-9446-54864fcd31b4] Pending
helpers_test.go:352: "test-job-nginx-0" [11a681f3-f216-4754-9446-54864fcd31b4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [11a681f3-f216-4754-9446-54864fcd31b4] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003893176s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-774992 addons disable volcano --alsologtostderr -v=1: (12.031225471s)
--- PASS: TestAddons/serial/Volcano (41.76s)

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-774992 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-774992 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/serial/GCPAuth/FakeCredentials (8.9s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-774992 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-774992 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0522f3e8-dae0-498d-9e09-45f439551211] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0522f3e8-dae0-498d-9e09-45f439551211] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003513406s
addons_test.go:694: (dbg) Run:  kubectl --context addons-774992 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-774992 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-774992 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-774992 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.90s)
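The checks above reduce to confirming that the gcp-auth webhook mounted fake credentials into the pod. A manual sketch, assuming the busybox pod from testdata/busybox.yaml is still running:

    # Both environment variables and the mounted credential file come from the gcp-auth addon
    kubectl --context addons-774992 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-774992 exec busybox -- cat /google-app-creds.json
    kubectl --context addons-774992 exec busybox -- printenv GOOGLE_CLOUD_PROJECT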

TestAddons/parallel/Registry (16.11s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.3059ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-jz5xt" [2c3f7ba8-73b1-4944-aecc-f5f08de39e60] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004656265s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-jwwwm" [94155ff8-fe7f-4e8a-b214-7c48065126c8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003996647s
addons_test.go:392: (dbg) Run:  kubectl --context addons-774992 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-774992 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-774992 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.011714657s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.11s)
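The registry probe boils down to hitting the in-cluster service and noting the node IP that registry-proxy exposes on port 5000. A manual sketch against the same profile:

    # Expect HTTP headers back from the registry service
    kubectl --context addons-774992 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Node IP where registry-proxy publishes the registry
    minikube -p addons-774992 ip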

TestAddons/parallel/RegistryCreds (0.75s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.643583ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-774992
addons_test.go:332: (dbg) Run:  kubectl --context addons-774992 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.75s)

TestAddons/parallel/Ingress (18.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-774992 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-774992 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-774992 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [eb129a84-5543-4209-ac23-91bfe87ce2f3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [eb129a84-5543-4209-ac23-91bfe87ce2f3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003426842s
I1002 20:58:33.298150 2785630 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-774992 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-774992 addons disable ingress-dns --alsologtostderr -v=1: (1.329851627s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-774992 addons disable ingress --alsologtostderr -v=1: (7.859478242s)
--- PASS: TestAddons/parallel/Ingress (18.83s)
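The ingress verification is just a curl with an overridden Host header plus an ingress-dns lookup against the node IP; the nginx manifests themselves live in the minikube testdata and are not reproduced here:

    # Hit the ingress from inside the node for host nginx.example.com
    minikube -p addons-774992 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Resolve the ingress-dns test hostname against the node IP
    nslookup hello-john.test "$(minikube -p addons-774992 ip)"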

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xx2bx" [507a3b5c-7bda-41c9-929c-3a4d5858aa9a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004133769s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.581625ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-zwznr" [004de550-5d20-47fd-96b7-dbbcb6ab36e0] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004785311s
addons_test.go:463: (dbg) Run:  kubectl --context addons-774992 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)
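Once the metrics-server deployment is healthy, the functional check is simply that the metrics API answers:

    # Should print CPU/memory usage for kube-system pods
    kubectl --context addons-774992 top pods -n kube-system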

TestAddons/parallel/CSI (49.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1002 20:57:16.213374 2785630 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 20:57:16.217983 2785630 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 20:57:16.218012 2785630 kapi.go:107] duration metric: took 7.786179ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.798512ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/02 20:57:19 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4f8dea0e-3fd6-4938-b551-6bd24e19de54] Pending
helpers_test.go:352: "task-pv-pod" [4f8dea0e-3fd6-4938-b551-6bd24e19de54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [4f8dea0e-3fd6-4938-b551-6bd24e19de54] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003847819s
addons_test.go:572: (dbg) Run:  kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-774992 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-774992 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-774992 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-774992 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [69d67d70-0748-45b0-a4cd-600e31789c2b] Pending
helpers_test.go:352: "task-pv-pod-restore" [69d67d70-0748-45b0-a4cd-600e31789c2b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [69d67d70-0748-45b0-a4cd-600e31789c2b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003337531s
addons_test.go:614: (dbg) Run:  kubectl --context addons-774992 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-774992 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-774992 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-774992 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.858841327s)
--- PASS: TestAddons/parallel/CSI (49.63s)
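The CSI sequence above is the usual provision/snapshot/restore round trip. Sketched with the same testdata manifests (their contents are in the minikube repository, not shown here):

    kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Remove the source PVC/pod, then restore from the snapshot
    kubectl --context addons-774992 delete pod task-pv-pod
    kubectl --context addons-774992 delete pvc hpvc
    kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-774992 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml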

TestAddons/parallel/Headlamp (12.3s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-774992 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-swn8s" [590e3dec-718d-44aa-8508-911ed55f3dd0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-swn8s" [590e3dec-718d-44aa-8508-911ed55f3dd0] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003534187s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.30s)

TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-fd6d6" [ca31c5a0-c28f-4ff3-a220-59fdff31d274] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003058836s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (51.38s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-774992 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-774992 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774992 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5fca59af-aa9d-40b5-8808-9c49d56f7b5b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5fca59af-aa9d-40b5-8808-9c49d56f7b5b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5fca59af-aa9d-40b5-8808-9c49d56f7b5b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00369049s
addons_test.go:967: (dbg) Run:  kubectl --context addons-774992 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 ssh "cat /opt/local-path-provisioner/pvc-a72c6780-abc2-4dc3-9d6e-db75a010a533_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-774992 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-774992 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-774992 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.045784488s)
--- PASS: TestAddons/parallel/LocalPath (51.38s)

TestAddons/parallel/NvidiaDevicePlugin (6.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9lpmt" [325997ae-62f2-482e-8946-86d900683528] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003643117s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

TestAddons/parallel/Yakd (11.97s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2pgk6" [870ff201-94f9-4cfe-951d-a6b000404629] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004265661s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-774992 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-774992 addons disable yakd --alsologtostderr -v=1: (5.964642403s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

TestAddons/StoppedEnableDisable (12.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-774992
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-774992: (12.033366217s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-774992
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-774992
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-774992
--- PASS: TestAddons/StoppedEnableDisable (12.33s)
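What this test demonstrates is that addon toggling is accepted even when the cluster is stopped. The equivalent manual sequence:

    minikube stop -p addons-774992
    minikube addons enable dashboard -p addons-774992
    minikube addons disable dashboard -p addons-774992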

TestCertOptions (36.97s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-137159 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-137159 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.25402873s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-137159 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-137159 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-137159 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-137159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-137159
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-137159: (1.947220979s)
--- PASS: TestCertOptions (36.97s)
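To confirm custom SANs and a non-default secure port end up in the apiserver certificate, the same inspection can be done by hand. A sketch; the profile name is illustrative and the grep filter is a convenience added here, not part of the test:

    minikube start -p cert-options-demo --memory=3072 \
      --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # Dump the generated certificate and look for the SAN entries
    minikube -p cert-options-demo ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"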

TestCertExpiration (235.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-494528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-494528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (45.666115143s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-494528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-494528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.323951329s)
helpers_test.go:175: Cleaning up "cert-expiration-494528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-494528
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-494528: (2.31306195s)
--- PASS: TestCertExpiration (235.30s)
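The expiration flow amounts to issuing certificates with a short lifetime and then re-running start with a longer one so they are regenerated. Roughly (profile name illustrative):

    minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # ...wait for the short-lived certs to lapse, then renew with an 8760h lifetime
    minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd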

TestForceSystemdFlag (46.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-630218 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-630218 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.242867299s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-630218 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-630218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-630218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-630218: (2.966878371s)
--- PASS: TestForceSystemdFlag (46.60s)
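The assertion here is that --force-systemd switches containerd to the systemd cgroup driver. A sketch of the same check; the profile name is illustrative and the grep for containerd's SystemdCgroup key is a convenience, not part of the test:

    minikube start -p force-systemd-demo --memory=3072 --force-systemd \
      --driver=docker --container-runtime=containerd
    minikube -p force-systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup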

TestForceSystemdEnv (50.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-378690 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-378690 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (47.329891939s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-378690 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-378690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-378690
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-378690: (2.222145817s)
--- PASS: TestForceSystemdEnv (50.11s)

TestErrorSpam/setup (33.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-798504 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-798504 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-798504 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-798504 --driver=docker  --container-runtime=containerd: (33.165410753s)
--- PASS: TestErrorSpam/setup (33.17s)

TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 pause
--- PASS: TestErrorSpam/pause (1.73s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (12.17s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 stop: (11.967500517s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-798504 --log_dir /tmp/nospam-798504 stop
--- PASS: TestErrorSpam/stop (12.17s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21682-2783765/.minikube/files/etc/test/nested/copy/2785630/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-029371 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1002 21:01:04.105937 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:04.112712 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:04.124107 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:04.145543 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:04.186862 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:04.268252 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:04.429775 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:04.751434 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:05.393442 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:06.674823 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:09.236737 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:14.358062 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:24.599504 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:45.082910 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-029371 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m16.433465645s)
--- PASS: TestFunctional/serial/StartWithProxy (76.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.45s)

=== RUN   TestFunctional/serial/SoftStart
I1002 21:02:03.052163 2785630 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-029371 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-029371 --alsologtostderr -v=8: (7.448103398s)
functional_test.go:678: soft start took 7.45152794s for "functional-029371" cluster.
I1002 21:02:10.501218 2785630 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.45s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-029371 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 cache add registry.k8s.io/pause:3.1: (1.314947277s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 cache add registry.k8s.io/pause:3.3: (1.201615472s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 cache add registry.k8s.io/pause:latest: (1.122436484s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)
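The cache subcommands manage minikube's local image cache and load cached images into the node. A minimal session mirroring the steps above:

    # Cache an image locally and load it into the node's container runtime
    minikube -p functional-029371 cache add registry.k8s.io/pause:3.1
    minikube cache list
    minikube cache delete registry.k8s.io/pause:3.1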

TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-029371 /tmp/TestFunctionalserialCacheCmdcacheadd_local459566965/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cache add minikube-local-cache-test:functional-029371
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cache delete minikube-local-cache-test:functional-029371
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-029371
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.226952ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
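The remove/verify/reload/verify cycle driven by cache_reload can also be run manually. A minimal sketch with the same profile and image, a plain `minikube` binary standing in for out/minikube-linux-arm64:

    minikube -p functional-029371 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-029371 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    minikube -p functional-029371 cache reload                                            # re-push cached images into the node
    minikube -p functional-029371 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again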

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 kubectl -- --context functional-029371 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-029371 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (42.22s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-029371 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 21:02:26.044972 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-029371 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.223338666s)
functional_test.go:776: restart took 42.223456633s for "functional-029371" cluster.
I1002 21:03:00.502704 2785630 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.22s)
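The restart above exercises --extra-config, which forwards component flags into the running cluster. A minimal sketch of the same invocation (plain `minikube` binary assumed):

    # pass an admission-plugin setting to the kube-apiserver and wait for all components to return
    minikube start -p functional-029371 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all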

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-029371 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 logs: (1.501395543s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 logs --file /tmp/TestFunctionalserialLogsFileCmd1941604759/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 logs --file /tmp/TestFunctionalserialLogsFileCmd1941604759/001/logs.txt: (1.482618018s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-029371 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-029371
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-029371: exit status 115 (685.391142ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31427 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-029371 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.58s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 config get cpus: exit status 14 (93.765661ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 config get cpus: exit status 14 (84.500233ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
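In shell form, the round trip the ConfigCmd test runs looks like this; exit status 14 marks a key that is not present in the config (plain `minikube` binary assumed):

    minikube -p functional-029371 config unset cpus
    minikube -p functional-029371 config get cpus    # exit status 14: key not found
    minikube -p functional-029371 config set cpus 2
    minikube -p functional-029371 config get cpus    # prints 2
    minikube -p functional-029371 config unset cpus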

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-029371 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-029371 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (250.834737ms)

                                                
                                                
-- stdout --
	* [functional-029371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:13:25.171921 2823030 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:13:25.172150 2823030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:13:25.172179 2823030 out.go:374] Setting ErrFile to fd 2...
	I1002 21:13:25.172200 2823030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:13:25.172475 2823030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:13:25.172984 2823030 out.go:368] Setting JSON to false
	I1002 21:13:25.174010 2823030 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":60955,"bootTime":1759378651,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 21:13:25.174114 2823030 start.go:140] virtualization:  
	I1002 21:13:25.182379 2823030 out.go:179] * [functional-029371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:13:25.185353 2823030 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:13:25.185461 2823030 notify.go:220] Checking for updates...
	I1002 21:13:25.191105 2823030 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:13:25.193895 2823030 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:13:25.196885 2823030 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 21:13:25.199622 2823030 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:13:25.202490 2823030 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:13:25.205973 2823030 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:13:25.206632 2823030 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:13:25.252684 2823030 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:13:25.252795 2823030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:13:25.336472 2823030 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:13:25.325126336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:13:25.336575 2823030 docker.go:318] overlay module found
	I1002 21:13:25.339675 2823030 out.go:179] * Using the docker driver based on existing profile
	I1002 21:13:25.342507 2823030 start.go:304] selected driver: docker
	I1002 21:13:25.342525 2823030 start.go:924] validating driver "docker" against &{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:13:25.342644 2823030 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:13:25.346261 2823030 out.go:203] 
	W1002 21:13:25.349080 2823030 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 21:13:25.351896 2823030 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-029371 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.62s)
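Both dry-run calls above validate flags without modifying the existing cluster; the first fails because 250MB is below the 1800MB usable minimum. A minimal sketch of the two invocations (plain `minikube` binary assumed):

    # rejected up front with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23)
    minikube start -p functional-029371 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    # accepted: same profile, no memory override
    minikube start -p functional-029371 --dry-run --driver=docker --container-runtime=containerd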

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-029371 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-029371 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (260.24216ms)

                                                
                                                
-- stdout --
	* [functional-029371] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:13:24.932350 2822945 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:13:24.932678 2822945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:13:24.932686 2822945 out.go:374] Setting ErrFile to fd 2...
	I1002 21:13:24.932691 2822945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:13:24.933709 2822945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:13:24.934112 2822945 out.go:368] Setting JSON to false
	I1002 21:13:24.935120 2822945 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":60954,"bootTime":1759378651,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 21:13:24.935182 2822945 start.go:140] virtualization:  
	I1002 21:13:24.938610 2822945 out.go:179] * [functional-029371] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 21:13:24.942400 2822945 notify.go:220] Checking for updates...
	I1002 21:13:24.946585 2822945 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:13:24.949631 2822945 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:13:24.952975 2822945 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:13:24.956307 2822945 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 21:13:24.959176 2822945 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:13:24.961913 2822945 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:13:24.965408 2822945 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:13:24.965987 2822945 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:13:24.999475 2822945 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:13:24.999596 2822945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:13:25.084646 2822945 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:13:25.073305252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:13:25.084748 2822945 docker.go:318] overlay module found
	I1002 21:13:25.087917 2822945 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 21:13:25.090729 2822945 start.go:304] selected driver: docker
	I1002 21:13:25.090753 2822945 start.go:924] validating driver "docker" against &{Name:functional-029371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-029371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:13:25.090857 2822945 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:13:25.094657 2822945 out.go:203] 
	W1002 21:13:25.097580 2822945 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 21:13:25.100490 2822945 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.60s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh -n functional-029371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cp functional-029371:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd315440799/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh -n functional-029371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh -n functional-029371 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.46s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2785630/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo cat /etc/test/nested/copy/2785630/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2785630.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo cat /etc/ssl/certs/2785630.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2785630.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo cat /usr/share/ca-certificates/2785630.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/27856302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo cat /etc/ssl/certs/27856302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/27856302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo cat /usr/share/ca-certificates/27856302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-029371 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh "sudo systemctl is-active docker": exit status 1 (379.899721ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh "sudo systemctl is-active crio": exit status 1 (413.34135ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)
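With containerd as the selected runtime, docker and crio should report inactive, which is what the exit status 3 results above show. A minimal manual check (the containerd probe is an illustrative addition, not part of the test; plain `minikube` binary assumed):

    minikube -p functional-029371 ssh "sudo systemctl is-active containerd"   # expected: active
    minikube -p functional-029371 ssh "sudo systemctl is-active docker"       # expected: inactive, non-zero exit
    minikube -p functional-029371 ssh "sudo systemctl is-active crio"         # expected: inactive, non-zero exit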

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-029371 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-029371 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-029371 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-029371 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 2818612: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-029371 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-029371 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [430444f4-c24b-4c2f-b4fa-db18df199ee0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [430444f4-c24b-4c2f-b4fa-db18df199ee0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003803775s
I1002 21:03:18.567874 2785630 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)
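The tunnel subtests follow the usual pattern: keep `minikube tunnel` running, deploy a LoadBalancer-backed service, then read the ingress IP it receives. A minimal sketch using the manifest from the test data (plain `minikube` binary assumed):

    minikube -p functional-029371 tunnel --alsologtostderr &   # leave running in the background
    kubectl --context functional-029371 apply -f testdata/testsvc.yaml
    # once the nginx-svc pod is Running, the tunnel gives the service a reachable ingress IP
    kubectl --context functional-029371 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'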

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-029371 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.142.42 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-029371 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (350.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-029371 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-029371 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-jvqz4" [c6ee5f62-076c-459f-91bf-59a51539e968] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 21:11:04.105030 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "hello-node-75c85bcc94-jvqz4" [c6ee5f62-076c-459f-91bf-59a51539e968] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 5m50.003413908s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (350.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 service list -o json
functional_test.go:1504: Took "528.109728ms" to run "out/minikube-linux-arm64 -p functional-029371 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30979
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30979
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
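Taken together, the ServiceCmd subtests amount to the following flow; a minimal sketch with the names used in this run (plain `minikube` binary assumed):

    kubectl --context functional-029371 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-029371 expose deployment hello-node --type=NodePort --port=8080
    minikube -p functional-029371 service list                 # confirm the service is registered
    minikube -p functional-029371 service hello-node --url     # prints e.g. http://192.168.49.2:30979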

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "362.281369ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.214527ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "362.265821ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.672759ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdany-port1095637664/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759439592859047824" to /tmp/TestFunctionalparallelMountCmdany-port1095637664/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759439592859047824" to /tmp/TestFunctionalparallelMountCmdany-port1095637664/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759439592859047824" to /tmp/TestFunctionalparallelMountCmdany-port1095637664/001/test-1759439592859047824
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.458499ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 21:13:13.203761 2785630 retry.go:31] will retry after 641.319425ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 21:13 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 21:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 21:13 test-1759439592859047824
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh cat /mount-9p/test-1759439592859047824
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-029371 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [fbad512e-5bd9-4710-9776-20f6e7bd3473] Pending
helpers_test.go:352: "busybox-mount" [fbad512e-5bd9-4710-9776-20f6e7bd3473] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [fbad512e-5bd9-4710-9776-20f6e7bd3473] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [fbad512e-5bd9-4710-9776-20f6e7bd3473] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007956748s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-029371 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdany-port1095637664/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.34s)
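The any-port test drives a 9p mount of a host directory into the node. A minimal sketch, assuming /tmp/mount-demo is an illustrative host directory and a plain `minikube` binary:

    minikube mount -p functional-029371 /tmp/mount-demo:/mount-9p &       # keep the mount process running
    minikube -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount is visible in the node
    minikube -p functional-029371 ssh -- ls -la /mount-9p
    minikube -p functional-029371 ssh "sudo umount -f /mount-9p"          # or kill the backgrounded mount process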

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdspecific-port3660542288/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (493.682162ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 21:13:21.689970 2785630 retry.go:31] will retry after 346.967552ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdspecific-port3660542288/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh "sudo umount -f /mount-9p": exit status 1 (438.57943ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-029371 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdspecific-port3660542288/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.30s)
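For reference, the specific-port variant pins the 9p server to a fixed port with --port. A minimal sketch using the flags exercised above (the host path is illustrative; the first findmnt can fail once while the mount settles, which is why the test retries):
    out/minikube-linux-arm64 mount -p functional-029371 /tmp/mount-demo:/mount-9p --port 46464 &
    out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-029371 ssh "sudo umount -f /mount-9p"   # exits non-zero if already unmounted, as seen above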
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdVerifyCleanup443465650/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdVerifyCleanup443465650/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdVerifyCleanup443465650/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T" /mount1: exit status 1 (681.051693ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 21:13:24.182196 2785630 retry.go:31] will retry after 521.689977ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-029371 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdVerifyCleanup443465650/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdVerifyCleanup443465650/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-029371 /tmp/TestFunctionalparallelMountCmdVerifyCleanup443465650/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)
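VerifyCleanup starts three mounts of the same host directory and then tears them all down with mount --kill=true instead of unmounting each one. A minimal sketch of that teardown path, with an illustrative host path:
    out/minikube-linux-arm64 mount -p functional-029371 /tmp/mount-demo:/mount1 &
    out/minikube-linux-arm64 mount -p functional-029371 /tmp/mount-demo:/mount2 &
    out/minikube-linux-arm64 mount -p functional-029371 /tmp/mount-demo:/mount3 &
    out/minikube-linux-arm64 -p functional-029371 ssh "findmnt -T" /mount1
    out/minikube-linux-arm64 mount -p functional-029371 --kill=true   # stops the mount helpers for this profile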
x
+
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
x
+
TestFunctional/parallel/Version/components (1.16s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 version -o=json --components: (1.16355477s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-029371 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-029371
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-029371
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-029371 image ls --format short --alsologtostderr:
I1002 21:13:36.416543 2824890 out.go:360] Setting OutFile to fd 1 ...
I1002 21:13:36.416657 2824890 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:36.416668 2824890 out.go:374] Setting ErrFile to fd 2...
I1002 21:13:36.416673 2824890 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:36.416913 2824890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
I1002 21:13:36.417522 2824890 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:36.417633 2824890 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:36.418097 2824890 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
I1002 21:13:36.435870 2824890 ssh_runner.go:195] Run: systemctl --version
I1002 21:13:36.435928 2824890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
I1002 21:13:36.452779 2824890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
I1002 21:13:36.546256 2824890 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-029371 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ localhost/my-image                          │ functional-029371  │ sha256:b21548 │ 831kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-029371  │ sha256:e590d7 │ 991B   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kicbase/echo-server               │ functional-029371  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:35f3cb │ 22.9MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-029371 image ls --format table --alsologtostderr:
I1002 21:13:40.615730 2825249 out.go:360] Setting OutFile to fd 1 ...
I1002 21:13:40.615946 2825249 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:40.615958 2825249 out.go:374] Setting ErrFile to fd 2...
I1002 21:13:40.615968 2825249 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:40.616270 2825249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
I1002 21:13:40.617010 2825249 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:40.617197 2825249 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:40.617737 2825249 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
I1002 21:13:40.636085 2825249 ssh_runner.go:195] Run: systemctl --version
I1002 21:13:40.636145 2825249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
I1002 21:13:40.654048 2825249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
I1002 21:13:40.770820 2825249 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-029371 image ls --format json --alsologtostderr:
[{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1
ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-029371"],"size":"2173567"},{"id":"sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22948447"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b215485067f22f9f819c36d779006cfede73584e38706c55a94408676bb8ae89","repoDigests":[],"repoTags":["localhost/my-image:functional-029371"],"size":"830618"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef
1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:e590d7b9211550f3691c21e867863d59466fb70864b58ac5c230965724a9fa9e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-029371"],"size":"991"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:a18
94772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-029371 image ls --format json --alsologtostderr:
I1002 21:13:40.398672 2825211 out.go:360] Setting OutFile to fd 1 ...
I1002 21:13:40.398835 2825211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:40.398919 2825211 out.go:374] Setting ErrFile to fd 2...
I1002 21:13:40.398935 2825211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:40.399344 2825211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
I1002 21:13:40.400325 2825211 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:40.400484 2825211 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:40.401147 2825211 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
I1002 21:13:40.419513 2825211 ssh_runner.go:195] Run: systemctl --version
I1002 21:13:40.419575 2825211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
I1002 21:13:40.436433 2825211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
I1002 21:13:40.534031 2825211 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-029371 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22948447"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-029371
size: "2173567"
- id: sha256:e590d7b9211550f3691c21e867863d59466fb70864b58ac5c230965724a9fa9e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-029371
size: "991"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-029371 image ls --format yaml --alsologtostderr:
I1002 21:13:36.629675 2824927 out.go:360] Setting OutFile to fd 1 ...
I1002 21:13:36.629809 2824927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:36.629820 2824927 out.go:374] Setting ErrFile to fd 2...
I1002 21:13:36.629824 2824927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:36.630061 2824927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
I1002 21:13:36.630733 2824927 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:36.630890 2824927 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:36.631489 2824927 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
I1002 21:13:36.649252 2824927 ssh_runner.go:195] Run: systemctl --version
I1002 21:13:36.649308 2824927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
I1002 21:13:36.666736 2824927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
I1002 21:13:36.761845 2824927 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
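The four ImageList variants above differ only in output encoding; on a containerd runtime each one is backed by the same "sudo crictl images --output json" call on the node, as the stderr traces show. A minimal sketch of the commands exercised:
    out/minikube-linux-arm64 -p functional-029371 image ls --format short
    out/minikube-linux-arm64 -p functional-029371 image ls --format table
    out/minikube-linux-arm64 -p functional-029371 image ls --format json
    out/minikube-linux-arm64 -p functional-029371 image ls --format yaml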
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-029371 ssh pgrep buildkitd: exit status 1 (272.972983ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image build -t localhost/my-image:functional-029371 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-029371 image build -t localhost/my-image:functional-029371 testdata/build --alsologtostderr: (3.043757842s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-029371 image build -t localhost/my-image:functional-029371 testdata/build --alsologtostderr:
I1002 21:13:37.118134 2825024 out.go:360] Setting OutFile to fd 1 ...
I1002 21:13:37.119985 2825024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:37.120048 2825024 out.go:374] Setting ErrFile to fd 2...
I1002 21:13:37.120068 2825024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:13:37.120437 2825024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
I1002 21:13:37.121289 2825024 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:37.123544 2825024 config.go:182] Loaded profile config "functional-029371": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 21:13:37.124082 2825024 cli_runner.go:164] Run: docker container inspect functional-029371 --format={{.State.Status}}
I1002 21:13:37.143229 2825024 ssh_runner.go:195] Run: systemctl --version
I1002 21:13:37.143341 2825024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-029371
I1002 21:13:37.160310 2825024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36127 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/functional-029371/id_rsa Username:docker}
I1002 21:13:37.254228 2825024 build_images.go:161] Building image from path: /tmp/build.3947152942.tar
I1002 21:13:37.254301 2825024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 21:13:37.262768 2825024 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3947152942.tar
I1002 21:13:37.266852 2825024 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3947152942.tar: stat -c "%s %y" /var/lib/minikube/build/build.3947152942.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3947152942.tar': No such file or directory
I1002 21:13:37.266883 2825024 ssh_runner.go:362] scp /tmp/build.3947152942.tar --> /var/lib/minikube/build/build.3947152942.tar (3072 bytes)
I1002 21:13:37.285958 2825024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3947152942
I1002 21:13:37.294008 2825024 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3947152942 -xf /var/lib/minikube/build/build.3947152942.tar
I1002 21:13:37.302451 2825024 containerd.go:394] Building image: /var/lib/minikube/build/build.3947152942
I1002 21:13:37.302546 2825024 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3947152942 --local dockerfile=/var/lib/minikube/build/build.3947152942 --output type=image,name=localhost/my-image:functional-029371
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:be1c362746b37a2d1f462e761373c626baba297dab3463b0179ef6d56b8e1287
#8 exporting manifest sha256:be1c362746b37a2d1f462e761373c626baba297dab3463b0179ef6d56b8e1287 0.0s done
#8 exporting config sha256:b215485067f22f9f819c36d779006cfede73584e38706c55a94408676bb8ae89 0.0s done
#8 naming to localhost/my-image:functional-029371 done
#8 DONE 0.2s
I1002 21:13:40.088485 2825024 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3947152942 --local dockerfile=/var/lib/minikube/build/build.3947152942 --output type=image,name=localhost/my-image:functional-029371: (2.785904095s)
I1002 21:13:40.088569 2825024 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3947152942
I1002 21:13:40.097474 2825024 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3947152942.tar
I1002 21:13:40.106765 2825024 build_images.go:217] Built localhost/my-image:functional-029371 from /tmp/build.3947152942.tar
I1002 21:13:40.106799 2825024 build_images.go:133] succeeded building to: functional-029371
I1002 21:13:40.106805 2825024 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)
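As the trace above shows, on a containerd runtime "image build" ships the build context to the node as a tar under /var/lib/minikube/build and builds it there with buildctl's dockerfile.v0 frontend. A minimal sketch of the same build and the follow-up check:
    out/minikube-linux-arm64 -p functional-029371 image build -t localhost/my-image:functional-029371 testdata/build --alsologtostderr
    out/minikube-linux-arm64 -p functional-029371 image ls | grep my-image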
x
+
TestFunctional/parallel/ImageCommands/Setup (0.67s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-029371
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image load --daemon kicbase/echo-server:functional-029371 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image load --daemon kicbase/echo-server:functional-029371 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-029371
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image load --daemon kicbase/echo-server:functional-029371 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image save kicbase/echo-server:functional-029371 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image rm kicbase/echo-server:functional-029371 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-029371
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 image save --daemon kicbase/echo-server:functional-029371 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-029371
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
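Taken together, the Image* tests above exercise a full save/load round trip. A minimal sketch using the same subcommands, with an illustrative tarball path in place of the workspace path used by the test:
    out/minikube-linux-arm64 -p functional-029371 image save kicbase/echo-server:functional-029371 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-029371 image rm kicbase/echo-server:functional-029371
    out/minikube-linux-arm64 -p functional-029371 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-029371 image save --daemon kicbase/echo-server:functional-029371
    docker image inspect kicbase/echo-server:functional-029371   # image is back in the host Docker daemon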
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 update-context --alsologtostderr -v=2
E1002 21:16:04.104882 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:17:27.169594 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-029371 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
x
+
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-029371
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
x
+
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-029371
--- PASS: TestFunctional/delete_my-image_image (0.02s)
x
+
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-029371
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
x
+
TestMultiControlPlane/serial/StartCluster (216.68s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1002 21:21:04.105194 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m35.78767922s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (216.68s)
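StartCluster brings the HA cluster up in one invocation; --ha requests multiple control-plane nodes, and --wait true keeps the command running until the default set of components reports healthy. A minimal sketch of the start plus the status check used above:
    out/minikube-linux-arm64 -p ha-537012 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5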
x
+
TestMultiControlPlane/serial/DeployApp (50.15s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 kubectl -- rollout status deployment/busybox: (5.032558903s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:13.089203 2785630 retry.go:31] will retry after 767.070692ms: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:14.044222 2785630 retry.go:31] will retry after 935.074333ms: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:15.165803 2785630 retry.go:31] will retry after 2.229932566s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:17.585390 2785630 retry.go:31] will retry after 3.748009974s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:21.512114 2785630 retry.go:31] will retry after 4.901583521s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:26.585981 2785630 retry.go:31] will retry after 6.967988268s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:33.706487 2785630 retry.go:31] will retry after 7.923690946s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
I1002 21:22:41.812031 2785630 retry.go:31] will retry after 13.363272797s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2 10.244.1.3 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-57h8l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-l7bmn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-vs8px -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-57h8l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-l7bmn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-vs8px -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-57h8l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-l7bmn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-vs8px -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (50.15s)
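The retry loop above is the interesting part: right after the rollout the jsonpath query briefly returns four pod IPs, presumably because a replaced busybox replica was still terminating, and the test polls until exactly three remain. A rough equivalent of that wait, using the same query and one of the pod names printed above:
    kubectl --context ha-537012 rollout status deployment/busybox
    until [ "$(kubectl --context ha-537012 get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 3 ]; do sleep 2; done
    kubectl --context ha-537012 exec busybox-7b57f96db7-57h8l -- nslookup kubernetes.default.svc.cluster.local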
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.76s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-57h8l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-57h8l -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-l7bmn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-l7bmn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-vs8px -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 kubectl -- exec busybox-7b57f96db7-vs8px -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.76s)
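Each pod here resolves host.minikube.internal, plucks the address out of nslookup's output (line 5, third field), and pings it; with the docker driver that address is the 192.168.49.1 gateway seen above. A minimal sketch of the same probe against a single pod:
    POD=busybox-7b57f96db7-57h8l
    HOST_IP=$(kubectl --context ha-537012 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-537012 exec "$POD" -- sh -c "ping -c 1 $HOST_IP"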
x
+
TestMultiControlPlane/serial/AddWorkerNode (60.95s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 node add --alsologtostderr -v 5
E1002 21:23:10.105630 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:10.112056 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:10.123814 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:10.145242 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:10.186612 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:10.271491 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:10.433575 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:10.755724 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:11.397631 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:12.679551 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:15.240824 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:20.362399 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:30.604558 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:23:51.086815 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 node add --alsologtostderr -v 5: (59.608761003s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5: (1.338403312s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.95s)
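node add attaches a fourth node to the running HA cluster (ha-537012-m04, which the later CopyFile checks treat as a worker). The repeated cert_rotation errors interleaved above appear to come from the test process trying to reload a client certificate for the earlier functional-029371 profile whose client.crt no longer exists on disk; the test still passes. A minimal sketch of the same operation:
    out/minikube-linux-arm64 -p ha-537012 node add --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5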
x
+
TestMultiControlPlane/serial/NodeLabels (0.19s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-537012 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.107861334s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 status --output json --alsologtostderr -v 5: (1.083411969s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp testdata/cp-test.txt ha-537012:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691916521/001/cp-test_ha-537012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012:/home/docker/cp-test.txt ha-537012-m02:/home/docker/cp-test_ha-537012_ha-537012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test_ha-537012_ha-537012-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012:/home/docker/cp-test.txt ha-537012-m03:/home/docker/cp-test_ha-537012_ha-537012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test_ha-537012_ha-537012-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012:/home/docker/cp-test.txt ha-537012-m04:/home/docker/cp-test_ha-537012_ha-537012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test_ha-537012_ha-537012-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp testdata/cp-test.txt ha-537012-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691916521/001/cp-test_ha-537012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m02:/home/docker/cp-test.txt ha-537012:/home/docker/cp-test_ha-537012-m02_ha-537012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test_ha-537012-m02_ha-537012.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m02:/home/docker/cp-test.txt ha-537012-m03:/home/docker/cp-test_ha-537012-m02_ha-537012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test_ha-537012-m02_ha-537012-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m02:/home/docker/cp-test.txt ha-537012-m04:/home/docker/cp-test_ha-537012-m02_ha-537012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test_ha-537012-m02_ha-537012-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp testdata/cp-test.txt ha-537012-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691916521/001/cp-test_ha-537012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m03:/home/docker/cp-test.txt ha-537012:/home/docker/cp-test_ha-537012-m03_ha-537012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test_ha-537012-m03_ha-537012.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m03:/home/docker/cp-test.txt ha-537012-m02:/home/docker/cp-test_ha-537012-m03_ha-537012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test_ha-537012-m03_ha-537012-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m03:/home/docker/cp-test.txt ha-537012-m04:/home/docker/cp-test_ha-537012-m03_ha-537012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test_ha-537012-m03_ha-537012-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp testdata/cp-test.txt ha-537012-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691916521/001/cp-test_ha-537012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m04:/home/docker/cp-test.txt ha-537012:/home/docker/cp-test_ha-537012-m04_ha-537012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test_ha-537012-m04_ha-537012.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m04:/home/docker/cp-test.txt ha-537012-m02:/home/docker/cp-test_ha-537012-m04_ha-537012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test_ha-537012-m04_ha-537012-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 cp ha-537012-m04:/home/docker/cp-test.txt ha-537012-m03:/home/docker/cp-test_ha-537012-m04_ha-537012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m03 "sudo cat /home/docker/cp-test_ha-537012-m04_ha-537012-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.61s)
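Note: the copy/verify pattern exercised above can be replayed by hand; a minimal sketch, assuming the ha-537012 profile from this run is still up (the file names are the ones the test uses):
	# copy a local file into the primary control plane, then read it back over ssh
	out/minikube-linux-arm64 -p ha-537012 cp testdata/cp-test.txt ha-537012:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012 "sudo cat /home/docker/cp-test.txt"
	# copy node-to-node and verify on the receiving node
	out/minikube-linux-arm64 -p ha-537012 cp ha-537012:/home/docker/cp-test.txt ha-537012-m02:/home/docker/cp-test_ha-537012_ha-537012-m02.txt
	out/minikube-linux-arm64 -p ha-537012 ssh -n ha-537012-m02 "sudo cat /home/docker/cp-test_ha-537012_ha-537012-m02.txt"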

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 node stop m02 --alsologtostderr -v 5
E1002 21:24:32.048546 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 node stop m02 --alsologtostderr -v 5: (12.03004365s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5: exit status 7 (768.223385ms)

                                                
                                                
-- stdout --
	ha-537012
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-537012-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-537012-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-537012-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:24:33.443061 2842693 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:24:33.443266 2842693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:24:33.443325 2842693 out.go:374] Setting ErrFile to fd 2...
	I1002 21:24:33.443346 2842693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:24:33.443635 2842693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:24:33.443844 2842693 out.go:368] Setting JSON to false
	I1002 21:24:33.443873 2842693 mustload.go:65] Loading cluster: ha-537012
	I1002 21:24:33.444400 2842693 config.go:182] Loaded profile config "ha-537012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:24:33.444420 2842693 status.go:174] checking status of ha-537012 ...
	I1002 21:24:33.444459 2842693 notify.go:220] Checking for updates...
	I1002 21:24:33.444970 2842693 cli_runner.go:164] Run: docker container inspect ha-537012 --format={{.State.Status}}
	I1002 21:24:33.466511 2842693 status.go:371] ha-537012 host status = "Running" (err=<nil>)
	I1002 21:24:33.466533 2842693 host.go:66] Checking if "ha-537012" exists ...
	I1002 21:24:33.466856 2842693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-537012
	I1002 21:24:33.492961 2842693 host.go:66] Checking if "ha-537012" exists ...
	I1002 21:24:33.493277 2842693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:24:33.493331 2842693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-537012
	I1002 21:24:33.514162 2842693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36132 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/ha-537012/id_rsa Username:docker}
	I1002 21:24:33.609587 2842693 ssh_runner.go:195] Run: systemctl --version
	I1002 21:24:33.618028 2842693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:24:33.634105 2842693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:24:33.697140 2842693 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 21:24:33.686744937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:24:33.697702 2842693 kubeconfig.go:125] found "ha-537012" server: "https://192.168.49.254:8443"
	I1002 21:24:33.697747 2842693 api_server.go:166] Checking apiserver status ...
	I1002 21:24:33.697800 2842693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:24:33.711553 2842693 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	I1002 21:24:33.720528 2842693 api_server.go:182] apiserver freezer: "9:freezer:/docker/4db42a3b0bbf873f23ab79ec8212947eb162c2343732d0ec7108d9443d4884a6/kubepods/burstable/pod2dc1344f8a94eabcb45d48f2866d817a/5eace783be20e4ab403ebb5c4512f68985f6597a8d9cb2ae5e4ac05962dab4bd"
	I1002 21:24:33.720606 2842693 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4db42a3b0bbf873f23ab79ec8212947eb162c2343732d0ec7108d9443d4884a6/kubepods/burstable/pod2dc1344f8a94eabcb45d48f2866d817a/5eace783be20e4ab403ebb5c4512f68985f6597a8d9cb2ae5e4ac05962dab4bd/freezer.state
	I1002 21:24:33.728180 2842693 api_server.go:204] freezer state: "THAWED"
	I1002 21:24:33.728211 2842693 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:24:33.738130 2842693 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:24:33.738161 2842693 status.go:463] ha-537012 apiserver status = Running (err=<nil>)
	I1002 21:24:33.738173 2842693 status.go:176] ha-537012 status: &{Name:ha-537012 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:24:33.738192 2842693 status.go:174] checking status of ha-537012-m02 ...
	I1002 21:24:33.738506 2842693 cli_runner.go:164] Run: docker container inspect ha-537012-m02 --format={{.State.Status}}
	I1002 21:24:33.756356 2842693 status.go:371] ha-537012-m02 host status = "Stopped" (err=<nil>)
	I1002 21:24:33.756380 2842693 status.go:384] host is not running, skipping remaining checks
	I1002 21:24:33.756387 2842693 status.go:176] ha-537012-m02 status: &{Name:ha-537012-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:24:33.756407 2842693 status.go:174] checking status of ha-537012-m03 ...
	I1002 21:24:33.756721 2842693 cli_runner.go:164] Run: docker container inspect ha-537012-m03 --format={{.State.Status}}
	I1002 21:24:33.774780 2842693 status.go:371] ha-537012-m03 host status = "Running" (err=<nil>)
	I1002 21:24:33.774805 2842693 host.go:66] Checking if "ha-537012-m03" exists ...
	I1002 21:24:33.775123 2842693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-537012-m03
	I1002 21:24:33.802421 2842693 host.go:66] Checking if "ha-537012-m03" exists ...
	I1002 21:24:33.802737 2842693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:24:33.802777 2842693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-537012-m03
	I1002 21:24:33.821718 2842693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36142 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/ha-537012-m03/id_rsa Username:docker}
	I1002 21:24:33.924982 2842693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:24:33.938244 2842693 kubeconfig.go:125] found "ha-537012" server: "https://192.168.49.254:8443"
	I1002 21:24:33.938316 2842693 api_server.go:166] Checking apiserver status ...
	I1002 21:24:33.938387 2842693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:24:33.950760 2842693 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I1002 21:24:33.959061 2842693 api_server.go:182] apiserver freezer: "9:freezer:/docker/dd557e7daaee00342c370b5f4c39c258f4e60bca6f5ca8d8be76dc90c51a4b5e/kubepods/burstable/poddd63b955f17ce7cca3097e7c60f157fa/b88c74f863d8d1ad17a0dda2ccf367bc5432d3df060694b89d0ea17543368307"
	I1002 21:24:33.959148 2842693 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dd557e7daaee00342c370b5f4c39c258f4e60bca6f5ca8d8be76dc90c51a4b5e/kubepods/burstable/poddd63b955f17ce7cca3097e7c60f157fa/b88c74f863d8d1ad17a0dda2ccf367bc5432d3df060694b89d0ea17543368307/freezer.state
	I1002 21:24:33.966900 2842693 api_server.go:204] freezer state: "THAWED"
	I1002 21:24:33.966932 2842693 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:24:33.975703 2842693 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:24:33.975734 2842693 status.go:463] ha-537012-m03 apiserver status = Running (err=<nil>)
	I1002 21:24:33.975744 2842693 status.go:176] ha-537012-m03 status: &{Name:ha-537012-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:24:33.975763 2842693 status.go:174] checking status of ha-537012-m04 ...
	I1002 21:24:33.976076 2842693 cli_runner.go:164] Run: docker container inspect ha-537012-m04 --format={{.State.Status}}
	I1002 21:24:33.997114 2842693 status.go:371] ha-537012-m04 host status = "Running" (err=<nil>)
	I1002 21:24:33.997140 2842693 host.go:66] Checking if "ha-537012-m04" exists ...
	I1002 21:24:33.997437 2842693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-537012-m04
	I1002 21:24:34.018611 2842693 host.go:66] Checking if "ha-537012-m04" exists ...
	I1002 21:24:34.018972 2842693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:24:34.019030 2842693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-537012-m04
	I1002 21:24:34.039664 2842693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36147 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/ha-537012-m04/id_rsa Username:docker}
	I1002 21:24:34.140753 2842693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:24:34.154331 2842693 status.go:176] ha-537012-m04 status: &{Name:ha-537012-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.80s)
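Note: the stderr trace above shows how status decides an apiserver is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz on the shared HA endpoint. A minimal manual sketch, assuming a running control-plane node with the cgroup v1 freezer layout seen in this log:
	# inside the node (minikube ssh): locate the apiserver and check it is not frozen
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
	sudo cat /sys/fs/cgroup/freezer$CG/freezer.state    # expect THAWED
	# from the host: probe the HA endpoint used throughout this run
	curl -k https://192.168.49.254:8443/healthz          # expect ok (HTTP 200)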

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (14.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 node start m02 --alsologtostderr -v 5: (12.637621683s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5: (1.376937549s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.531855291s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 stop --alsologtostderr -v 5: (36.943207351s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 start --wait true --alsologtostderr -v 5
E1002 21:25:53.970217 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:26:04.104959 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 start --wait true --alsologtostderr -v 5: (1m1.410333524s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 node delete m03 --alsologtostderr -v 5: (9.882736637s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 stop --alsologtostderr -v 5: (35.832043692s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5: exit status 7 (120.692722ms)

                                                
                                                
-- stdout --
	ha-537012
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-537012-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-537012-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:27:16.706482 2857662 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:27:16.706604 2857662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:27:16.706616 2857662 out.go:374] Setting ErrFile to fd 2...
	I1002 21:27:16.706622 2857662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:27:16.706958 2857662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:27:16.707180 2857662 out.go:368] Setting JSON to false
	I1002 21:27:16.707212 2857662 mustload.go:65] Loading cluster: ha-537012
	I1002 21:27:16.707929 2857662 config.go:182] Loaded profile config "ha-537012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:27:16.707948 2857662 status.go:174] checking status of ha-537012 ...
	I1002 21:27:16.708662 2857662 cli_runner.go:164] Run: docker container inspect ha-537012 --format={{.State.Status}}
	I1002 21:27:16.709027 2857662 notify.go:220] Checking for updates...
	I1002 21:27:16.730428 2857662 status.go:371] ha-537012 host status = "Stopped" (err=<nil>)
	I1002 21:27:16.730462 2857662 status.go:384] host is not running, skipping remaining checks
	I1002 21:27:16.730469 2857662 status.go:176] ha-537012 status: &{Name:ha-537012 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:27:16.730502 2857662 status.go:174] checking status of ha-537012-m02 ...
	I1002 21:27:16.730815 2857662 cli_runner.go:164] Run: docker container inspect ha-537012-m02 --format={{.State.Status}}
	I1002 21:27:16.759662 2857662 status.go:371] ha-537012-m02 host status = "Stopped" (err=<nil>)
	I1002 21:27:16.759686 2857662 status.go:384] host is not running, skipping remaining checks
	I1002 21:27:16.759695 2857662 status.go:176] ha-537012-m02 status: &{Name:ha-537012-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:27:16.759713 2857662 status.go:174] checking status of ha-537012-m04 ...
	I1002 21:27:16.760042 2857662 cli_runner.go:164] Run: docker container inspect ha-537012-m04 --format={{.State.Status}}
	I1002 21:27:16.778455 2857662 status.go:371] ha-537012-m04 host status = "Stopped" (err=<nil>)
	I1002 21:27:16.778478 2857662 status.go:384] host is not running, skipping remaining checks
	I1002 21:27:16.778486 2857662 status.go:176] ha-537012-m04 status: &{Name:ha-537012-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (61.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1002 21:28:10.105878 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m0.758973823s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.72s)
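Note: the restart sequence above can be replayed directly; a minimal sketch, assuming the same profile and binary (the go-template is the readiness check the test runs):
	# stop the whole cluster, restart with --wait true, then confirm every node reports Ready
	out/minikube-linux-arm64 -p ha-537012 stop --alsologtostderr -v 5
	out/minikube-linux-arm64 -p ha-537012 start --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=containerd
	kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'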

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (82.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 node add --control-plane --alsologtostderr -v 5
E1002 21:28:37.811494 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 node add --control-plane --alsologtostderr -v 5: (1m21.332949363s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-537012 status --alsologtostderr -v 5: (1.093831194s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.065275506s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                    
x
+
TestJSONOutput/start/Command (50.71s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-755175 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-755175 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (50.706236478s)
--- PASS: TestJSONOutput/start/Command (50.71s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-755175 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-755175 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-755175 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-755175 --output=json --user=testUser: (5.828457504s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-327730 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-327730 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.660542ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0f71a89e-5747-4bdc-aeaf-8622a700fda1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-327730] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cac3fed3-1756-487c-a34d-7b7de25abf7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"7790eff3-b015-4de6-84f2-435746a9255a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ef7dfa9-26de-466e-81c6-8d3c6cb6c19d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig"}}
	{"specversion":"1.0","id":"902d6c29-e27c-4a94-b4bf-48029daa6215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube"}}
	{"specversion":"1.0","id":"80836f68-50ce-4802-a34d-4f5ef92da2e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bb3e753a-181e-4f69-a9cc-cae9c242f170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d10dfa0b-81c5-420b-8735-a56f76d50dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-327730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-327730
--- PASS: TestErrorJSONOutput (0.24s)
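Note: every line of --output=json is a self-contained CloudEvents-style record (specversion, type, data), so a failed run can be triaged without scraping plain text. A minimal sketch, assuming jq is installed; the events.json filename is only illustrative:
	# capture the event stream, then pull out the setup steps and any error event
	out/minikube-linux-arm64 start -p json-output-error-327730 --memory=3072 --output=json --wait=true --driver=fail > events.json
	jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message' events.json
	jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"' events.json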

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (52.48s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-480622 --network=
E1002 21:31:04.105069 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-480622 --network=: (50.221694516s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-480622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-480622
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-480622: (2.223566554s)
--- PASS: TestKicCustomNetwork/create_custom_network (52.48s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (37.46s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-226692 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-226692 --network=bridge: (35.483083178s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-226692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-226692
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-226692: (1.945764419s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.46s)

                                                
                                    
x
+
TestKicExistingNetwork (35.84s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1002 21:32:23.307142 2785630 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 21:32:23.323946 2785630 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 21:32:23.324031 2785630 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 21:32:23.324048 2785630 cli_runner.go:164] Run: docker network inspect existing-network
W1002 21:32:23.339610 2785630 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 21:32:23.339639 2785630 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1002 21:32:23.339659 2785630 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1002 21:32:23.339785 2785630 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:32:23.356491 2785630 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eb731828eccc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:ea:75:ed:15:08} reservation:<nil>}
I1002 21:32:23.356810 2785630 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004995a0}
I1002 21:32:23.356837 2785630 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 21:32:23.356889 2785630 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 21:32:23.413154 2785630 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-174983 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-174983 --network=existing-network: (33.686185538s)
helpers_test.go:175: Cleaning up "existing-network-174983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-174983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-174983: (2.018489769s)
I1002 21:32:59.134949 2785630 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.84s)
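Note: the trace above shows the flow for reusing a pre-existing network: skip subnets that are already taken, create the bridge network with minikube's labels, then point --network at it. A minimal sketch of the same flow, using the exact flags from the log:
	# pre-create the bridge network the way the trace does, then attach a new profile to it
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	out/minikube-linux-arm64 start -p existing-network-174983 --network=existing-network
	docker network ls --format '{{.Name}}'    # existing-network should still be listed afterwards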

                                                
                                    
x
+
TestKicCustomSubnet (38.99s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-137762 --subnet=192.168.60.0/24
E1002 21:33:10.108029 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-137762 --subnet=192.168.60.0/24: (36.815617019s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-137762 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-137762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-137762
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-137762: (2.142769354s)
--- PASS: TestKicCustomSubnet (38.99s)
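Note: the --subnet flag can be verified straight from the Docker side; a minimal sketch using the same commands as the test (profile name as in this run):
	# start on a fixed subnet, confirm the created network actually uses it, then clean up
	out/minikube-linux-arm64 start -p custom-subnet-137762 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-137762 --format '{{(index .IPAM.Config 0).Subnet}}'    # expect 192.168.60.0/24
	out/minikube-linux-arm64 delete -p custom-subnet-137762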

                                                
                                    
x
+
TestKicStaticIP (37.15s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-826342 --static-ip=192.168.200.200
E1002 21:34:07.172332 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-826342 --static-ip=192.168.200.200: (34.801001178s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-826342 ip
helpers_test.go:175: Cleaning up "static-ip-826342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-826342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-826342: (2.189684014s)
--- PASS: TestKicStaticIP (37.15s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (73.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-430402 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-430402 --driver=docker  --container-runtime=containerd: (33.627951689s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-433340 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-433340 --driver=docker  --container-runtime=containerd: (34.079209814s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-430402
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-433340
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-433340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-433340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-433340: (2.064311739s)
helpers_test.go:175: Cleaning up "first-430402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-430402
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-430402: (1.970862756s)
--- PASS: TestMinikubeProfile (73.20s)
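
The two `profile list` calls above return machine-readable profile data that the test inspects after switching the active profile. A hedged sketch of consuming that output; the struct below only declares the fields this check needs, and the exact schema should be confirmed against the minikube version under test:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Partial, assumed shape of `minikube profile list --output json`; fields
	// not declared here are simply ignored by encoding/json.
	type profileList struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("unexpected output shape:", err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Println("profile:", p.Name)
		}
	}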

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-391741 --memory=3072 --mount-string /tmp/TestMountStartserial3792492780/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-391741 --memory=3072 --mount-string /tmp/TestMountStartserial3792492780/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.333427958s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.33s)
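
The long start invocation above is mostly mount plumbing: --mount-string is host-path:guest-path, while the port/uid/gid/msize flags tune the 9p mount. A small sketch that assembles a subset of those arguments and then re-checks the mount the way VerifyMountFirst does below (an `ls` over ssh); paths, profile name and port are copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		hostDir := "/tmp/TestMountStartserial3792492780/001" // host side of --mount-string
		guestDir := "/minikube-host"                         // guest side of --mount-string

		start := exec.Command("out/minikube-linux-arm64", "start", "-p", "mount-start-1-391741",
			"--memory=3072",
			"--mount-string", hostDir+":"+guestDir,
			"--mount-port", "46464",
			"--no-kubernetes", "--driver=docker", "--container-runtime=containerd")
		if out, err := start.CombinedOutput(); err != nil {
			fmt.Printf("start failed: %v\n%s\n", err, out)
			return
		}

		// Same check as VerifyMountFirst: list the guest directory over ssh.
		ls := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-391741",
			"ssh", "--", "ls", guestDir)
		out, err := ls.CombinedOutput()
		fmt.Printf("ls %s: err=%v\n%s", guestDir, err, out)
	}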

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-391741 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.84s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-393775 --memory=3072 --mount-string /tmp/TestMountStartserial3792492780/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-393775 --memory=3072 --mount-string /tmp/TestMountStartserial3792492780/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.835986902s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.84s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-393775 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-391741 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-391741 --alsologtostderr -v=5: (1.627198999s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-393775 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-393775
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-393775: (1.218591808s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.61s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-393775
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-393775: (6.614311331s)
--- PASS: TestMountStart/serial/RestartStopped (7.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-393775 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (109.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-572278 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1002 21:36:04.105250 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-572278 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m48.733348328s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.25s)
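
The plain-text `status --alsologtostderr` output appears further down in StopNode; for scripting against a multi-node profile, `status` also takes `--output json` (used by CopyFile below). A hedged sketch of reading it; the field names match the status struct visible in the stderr later in this report, but the assumption that multi-node output is a JSON array should be verified against the binary in use:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Assumed per-node shape of `minikube status --output json` for a
	// multi-node profile (one entry per node).
	type nodeStatus struct {
		Name    string
		Host    string
		Kubelet string
	}

	func main() {
		// A non-zero exit (e.g. a stopped node) still writes the JSON, so the
		// error from Output() is deliberately ignored in this sketch.
		out, _ := exec.Command("out/minikube-linux-arm64", "-p", "multinode-572278",
			"status", "--output", "json").Output()

		var nodes []nodeStatus
		if err := json.Unmarshal(out, &nodes); err != nil {
			fmt.Println("unexpected status shape:", err)
			return
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s\n", n.Name, n.Host, n.Kubelet)
		}
	}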

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-572278 -- rollout status deployment/busybox: (3.774927948s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-2wtg4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-zr8qf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-2wtg4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-zr8qf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-2wtg4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-zr8qf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.62s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-2wtg4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-2wtg4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-zr8qf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-572278 -- exec busybox-7b57f96db7-zr8qf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (29.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-572278 -v=5 --alsologtostderr
E1002 21:38:10.105156 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-572278 -v=5 --alsologtostderr: (28.307207008s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.01s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-572278 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.70s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp testdata/cp-test.txt multinode-572278:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3326992147/001/cp-test_multinode-572278.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278:/home/docker/cp-test.txt multinode-572278-m02:/home/docker/cp-test_multinode-572278_multinode-572278-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m02 "sudo cat /home/docker/cp-test_multinode-572278_multinode-572278-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278:/home/docker/cp-test.txt multinode-572278-m03:/home/docker/cp-test_multinode-572278_multinode-572278-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m03 "sudo cat /home/docker/cp-test_multinode-572278_multinode-572278-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp testdata/cp-test.txt multinode-572278-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3326992147/001/cp-test_multinode-572278-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278-m02:/home/docker/cp-test.txt multinode-572278:/home/docker/cp-test_multinode-572278-m02_multinode-572278.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278 "sudo cat /home/docker/cp-test_multinode-572278-m02_multinode-572278.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278-m02:/home/docker/cp-test.txt multinode-572278-m03:/home/docker/cp-test_multinode-572278-m02_multinode-572278-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m03 "sudo cat /home/docker/cp-test_multinode-572278-m02_multinode-572278-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp testdata/cp-test.txt multinode-572278-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3326992147/001/cp-test_multinode-572278-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278-m03:/home/docker/cp-test.txt multinode-572278:/home/docker/cp-test_multinode-572278-m03_multinode-572278.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278 "sudo cat /home/docker/cp-test_multinode-572278-m03_multinode-572278.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 cp multinode-572278-m03:/home/docker/cp-test.txt multinode-572278-m02:/home/docker/cp-test_multinode-572278-m03_multinode-572278-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 ssh -n multinode-572278-m02 "sudo cat /home/docker/cp-test_multinode-572278-m03_multinode-572278-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.32s)
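
The cp matrix above exercises all three directions `minikube cp` supports: local file into a node, node file back to the local machine, and node to node, each followed by re-reading the file with `ssh -- sudo cat`. A compact sketch of one such round trip for a single pair of nodes (destination paths are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary from this report and prints what happened.
	func run(args ...string) {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\nerr=%v\n%s\n", args, err, out)
	}

	func main() {
		p := "multinode-572278"
		// local -> primary node
		run("-p", p, "cp", "testdata/cp-test.txt", p+":/home/docker/cp-test.txt")
		// primary node -> local
		run("-p", p, "cp", p+":/home/docker/cp-test.txt", "/tmp/cp-test_copy.txt")
		// primary node -> second node
		run("-p", p, "cp", p+":/home/docker/cp-test.txt", p+"-m02:/home/docker/cp-test_from-primary.txt")
		// verify on the second node, the same way the helper does above
		run("-p", p, "ssh", "-n", p+"-m02", "sudo cat /home/docker/cp-test_from-primary.txt")
	}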

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-572278 node stop m03: (1.228858544s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-572278 status: exit status 7 (519.00151ms)

                                                
                                                
-- stdout --
	multinode-572278
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-572278-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-572278-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr: exit status 7 (512.723753ms)

                                                
                                                
-- stdout --
	multinode-572278
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-572278-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-572278-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:38:35.955484 2911090 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:38:35.955595 2911090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:38:35.955606 2911090 out.go:374] Setting ErrFile to fd 2...
	I1002 21:38:35.955611 2911090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:38:35.955879 2911090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:38:35.956062 2911090 out.go:368] Setting JSON to false
	I1002 21:38:35.956103 2911090 mustload.go:65] Loading cluster: multinode-572278
	I1002 21:38:35.956173 2911090 notify.go:220] Checking for updates...
	I1002 21:38:35.957007 2911090 config.go:182] Loaded profile config "multinode-572278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:38:35.957030 2911090 status.go:174] checking status of multinode-572278 ...
	I1002 21:38:35.957533 2911090 cli_runner.go:164] Run: docker container inspect multinode-572278 --format={{.State.Status}}
	I1002 21:38:35.975770 2911090 status.go:371] multinode-572278 host status = "Running" (err=<nil>)
	I1002 21:38:35.975795 2911090 host.go:66] Checking if "multinode-572278" exists ...
	I1002 21:38:35.976070 2911090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-572278
	I1002 21:38:35.997286 2911090 host.go:66] Checking if "multinode-572278" exists ...
	I1002 21:38:35.997594 2911090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:38:35.997645 2911090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-572278
	I1002 21:38:36.023106 2911090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36253 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/multinode-572278/id_rsa Username:docker}
	I1002 21:38:36.116903 2911090 ssh_runner.go:195] Run: systemctl --version
	I1002 21:38:36.124498 2911090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:38:36.138376 2911090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:38:36.195994 2911090 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:38:36.184885068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:38:36.196566 2911090 kubeconfig.go:125] found "multinode-572278" server: "https://192.168.67.2:8443"
	I1002 21:38:36.196604 2911090 api_server.go:166] Checking apiserver status ...
	I1002 21:38:36.196651 2911090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:38:36.209082 2911090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	I1002 21:38:36.217240 2911090 api_server.go:182] apiserver freezer: "9:freezer:/docker/a2dbf2c64252c4d84e451a3e9ef6223050a5796f3caa9e46dcd1c8e4d56c5d4d/kubepods/burstable/podfa4f1d4d3b1bfebf34ac0c8a74e54f1b/1c004afab932d70a815b06f0c14942f67f36290f99533ab146afb63e78006b51"
	I1002 21:38:36.217316 2911090 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a2dbf2c64252c4d84e451a3e9ef6223050a5796f3caa9e46dcd1c8e4d56c5d4d/kubepods/burstable/podfa4f1d4d3b1bfebf34ac0c8a74e54f1b/1c004afab932d70a815b06f0c14942f67f36290f99533ab146afb63e78006b51/freezer.state
	I1002 21:38:36.225117 2911090 api_server.go:204] freezer state: "THAWED"
	I1002 21:38:36.225145 2911090 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 21:38:36.233244 2911090 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 21:38:36.233273 2911090 status.go:463] multinode-572278 apiserver status = Running (err=<nil>)
	I1002 21:38:36.233285 2911090 status.go:176] multinode-572278 status: &{Name:multinode-572278 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:38:36.233303 2911090 status.go:174] checking status of multinode-572278-m02 ...
	I1002 21:38:36.233627 2911090 cli_runner.go:164] Run: docker container inspect multinode-572278-m02 --format={{.State.Status}}
	I1002 21:38:36.251652 2911090 status.go:371] multinode-572278-m02 host status = "Running" (err=<nil>)
	I1002 21:38:36.251678 2911090 host.go:66] Checking if "multinode-572278-m02" exists ...
	I1002 21:38:36.251979 2911090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-572278-m02
	I1002 21:38:36.268791 2911090 host.go:66] Checking if "multinode-572278-m02" exists ...
	I1002 21:38:36.269106 2911090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:38:36.269157 2911090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-572278-m02
	I1002 21:38:36.286396 2911090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36258 SSHKeyPath:/home/jenkins/minikube-integration/21682-2783765/.minikube/machines/multinode-572278-m02/id_rsa Username:docker}
	I1002 21:38:36.380587 2911090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:38:36.393154 2911090 status.go:176] multinode-572278-m02 status: &{Name:multinode-572278-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:38:36.393189 2911090 status.go:174] checking status of multinode-572278-m03 ...
	I1002 21:38:36.393489 2911090 cli_runner.go:164] Run: docker container inspect multinode-572278-m03 --format={{.State.Status}}
	I1002 21:38:36.411979 2911090 status.go:371] multinode-572278-m03 host status = "Stopped" (err=<nil>)
	I1002 21:38:36.412001 2911090 status.go:384] host is not running, skipping remaining checks
	I1002 21:38:36.412008 2911090 status.go:176] multinode-572278-m03 status: &{Name:multinode-572278-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
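
Note the exit code: once m03 is stopped, `minikube status` returns exit status 7 rather than 0 while still printing the per-node breakdown. Callers that treat any non-zero exit as a hard failure will misread a deliberately stopped node, so a wrapper usually needs to special-case it. A sketch, assuming exit code 7 keeps its meaning here of "at least one host stopped":

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-572278", "status")
		out, err := cmd.Output()

		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Print("all nodes running\n", string(out))
		case errors.As(err, &ee) && ee.ExitCode() == 7:
			// Assumption: 7 signals "some host/kubelet stopped", not a command failure.
			fmt.Print("profile partially stopped\n", string(out))
		default:
			fmt.Println("status failed:", err)
		}
	}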

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-572278 node start m03 -v=5 --alsologtostderr: (6.77186887s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.55s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (79.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-572278
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-572278
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-572278: (24.896983605s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-572278 --wait=true -v=5 --alsologtostderr
E1002 21:39:33.173497 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-572278 --wait=true -v=5 --alsologtostderr: (54.498822178s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-572278
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.52s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.60s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-572278 node delete m03: (4.909815892s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.60s)
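
The final kubectl call above uses a Go template to print one Ready condition per node, which is how the test confirms the expected node count after the delete. The same template unpacked with comments, driven through kubectl as in the log (standard text/template syntax evaluated against the NodeList):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Range over every node, then over its status conditions, and print the
		// status ("True"/"False") of the condition whose type is "Ready".
		const tmpl = `{{range .items}}` +
			`{{range .status.conditions}}` +
			`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}` +
			`{{end}}{{end}}`

		out, err := exec.Command("kubectl", "--context", "multinode-572278",
			"get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}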

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-572278 stop: (23.63580254s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-572278 status: exit status 7 (103.82337ms)

                                                
                                                
-- stdout --
	multinode-572278
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-572278-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr: exit status 7 (110.928923ms)

                                                
                                                
-- stdout --
	multinode-572278
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-572278-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:40:32.886501 2919908 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:40:32.886655 2919908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:40:32.886668 2919908 out.go:374] Setting ErrFile to fd 2...
	I1002 21:40:32.886696 2919908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:40:32.886973 2919908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:40:32.887192 2919908 out.go:368] Setting JSON to false
	I1002 21:40:32.887250 2919908 mustload.go:65] Loading cluster: multinode-572278
	I1002 21:40:32.887324 2919908 notify.go:220] Checking for updates...
	I1002 21:40:32.888311 2919908 config.go:182] Loaded profile config "multinode-572278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:40:32.888334 2919908 status.go:174] checking status of multinode-572278 ...
	I1002 21:40:32.888925 2919908 cli_runner.go:164] Run: docker container inspect multinode-572278 --format={{.State.Status}}
	I1002 21:40:32.907690 2919908 status.go:371] multinode-572278 host status = "Stopped" (err=<nil>)
	I1002 21:40:32.907715 2919908 status.go:384] host is not running, skipping remaining checks
	I1002 21:40:32.907722 2919908 status.go:176] multinode-572278 status: &{Name:multinode-572278 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:40:32.907747 2919908 status.go:174] checking status of multinode-572278-m02 ...
	I1002 21:40:32.908049 2919908 cli_runner.go:164] Run: docker container inspect multinode-572278-m02 --format={{.State.Status}}
	I1002 21:40:32.939392 2919908 status.go:371] multinode-572278-m02 host status = "Stopped" (err=<nil>)
	I1002 21:40:32.939412 2919908 status.go:384] host is not running, skipping remaining checks
	I1002 21:40:32.939419 2919908 status.go:176] multinode-572278-m02 status: &{Name:multinode-572278-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (52.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-572278 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1002 21:41:04.105206 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-572278 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.808460223s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-572278 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.49s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (37.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-572278
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-572278-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-572278-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.991229ms)

                                                
                                                
-- stdout --
	* [multinode-572278-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-572278-m02' is duplicated with machine name 'multinode-572278-m02' in profile 'multinode-572278'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-572278-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-572278-m03 --driver=docker  --container-runtime=containerd: (34.901510248s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-572278
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-572278: exit status 80 (369.408289ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-572278 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-572278-m03 already exists in multinode-572278-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-572278-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-572278-m03: (1.949284498s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.37s)

                                                
                                    
x
+
TestPreload (157.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-922670 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-922670 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (58.517545911s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-922670 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-922670 image pull gcr.io/k8s-minikube/busybox: (2.413277594s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-922670
E1002 21:43:10.105540 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-922670: (5.780043349s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-922670 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-922670 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m28.672310234s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-922670 image list
helpers_test.go:175: Cleaning up "test-preload-922670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-922670
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-922670: (2.295757013s)
--- PASS: TestPreload (157.92s)

                                                
                                    
x
+
TestScheduledStopUnix (111.48s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-458680 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-458680 --memory=3072 --driver=docker  --container-runtime=containerd: (34.443345832s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-458680 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-458680 -n scheduled-stop-458680
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-458680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 21:45:19.708846 2785630 retry.go:31] will retry after 126.801µs: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.711416 2785630 retry.go:31] will retry after 169.88µs: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.712556 2785630 retry.go:31] will retry after 156.479µs: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.713686 2785630 retry.go:31] will retry after 217.91µs: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.714812 2785630 retry.go:31] will retry after 388.522µs: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.715940 2785630 retry.go:31] will retry after 1.108785ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.718131 2785630 retry.go:31] will retry after 1.167416ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.720322 2785630 retry.go:31] will retry after 1.703959ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.722490 2785630 retry.go:31] will retry after 3.194432ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.726684 2785630 retry.go:31] will retry after 2.104268ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.729925 2785630 retry.go:31] will retry after 6.583324ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.737514 2785630 retry.go:31] will retry after 7.699627ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.745766 2785630 retry.go:31] will retry after 15.373984ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.762200 2785630 retry.go:31] will retry after 17.297431ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.780465 2785630 retry.go:31] will retry after 28.615289ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
I1002 21:45:19.809714 2785630 retry.go:31] will retry after 55.501299ms: open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/scheduled-stop-458680/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-458680 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-458680 -n scheduled-stop-458680
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-458680
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-458680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 21:46:04.104642 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-458680
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-458680: exit status 7 (76.45664ms)

                                                
                                                
-- stdout --
	scheduled-stop-458680
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-458680 -n scheduled-stop-458680
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-458680 -n scheduled-stop-458680: exit status 7 (71.782345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-458680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-458680
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-458680: (5.26863204s)
--- PASS: TestScheduledStopUnix (111.48s)
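
The sequence above is: schedule a stop five minutes out, confirm a TimeToStop is reported, reschedule to 15s, cancel, then schedule again and wait for the host to reach Stopped (the retry lines are the test polling for the scheduler's pid file, an internal detail). A condensed sketch of the schedule/cancel half, reusing the profile name from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk invokes the minikube binary from this report and returns its output.
	func mk(args ...string) string {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("minikube %v: %v\n", args, err)
		}
		return string(out)
	}

	func main() {
		p := "scheduled-stop-458680"

		mk("stop", "-p", p, "--schedule", "5m")            // arm a stop 5 minutes from now
		fmt.Print(mk("status", "--format={{.TimeToStop}}", // the schedule shows up in status
			"-p", p, "-n", p))
		mk("stop", "-p", p, "--cancel-scheduled") // and can be cancelled before it fires
		fmt.Print(mk("status", "-p", p))
	}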

                                                
                                    
x
+
TestInsufficientStorage (11.24s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-576268 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-576268 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.647755763s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8854cec3-522b-418c-9dec-db7968ff11a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-576268] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2c33bc4-24b1-4a1d-99f8-700f5cf47183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"ce2015c9-9a58-404d-b509-8e62244fe4c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a42e5629-4c66-46ff-b137-818f80c71fa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig"}}
	{"specversion":"1.0","id":"19b7ab5d-2af2-4d75-bd4c-e95a0136da43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube"}}
	{"specversion":"1.0","id":"ae0ec09c-aef8-48a7-b899-cfa9dc227275","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"87967db5-0930-4104-b48d-e2436bebc043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d40f4e23-8e8f-4f5c-8ce9-9b660dd43e98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"637d5b7e-c8c9-47a4-bd27-d3822f48ff1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9211af07-f394-4db8-a246-147084c2bb71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bacc4f12-4459-49b8-8aee-44f1fce7cc41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9141c083-3e68-4690-a226-98b3578309fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-576268\" primary control-plane node in \"insufficient-storage-576268\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9740678e-6ed3-4fb0-b2ee-1e48db4ab3b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ea3feca-0e7e-456c-a9dc-835b01e6e289","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ff5d59c-093e-48bf-8375-2aa1d134dc46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-576268 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-576268 --output=json --layout=cluster: exit status 7 (353.768998ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-576268","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-576268","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:46:45.206521 2938640 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-576268" does not appear in /home/jenkins/minikube-integration/21682-2783765/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-576268 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-576268 --output=json --layout=cluster: exit status 7 (353.466126ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-576268","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-576268","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:46:45.561784 2938706 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-576268" does not appear in /home/jenkins/minikube-integration/21682-2783765/kubeconfig
	E1002 21:46:45.571718 2938706 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/insufficient-storage-576268/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-576268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-576268
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-576268: (1.885759128s)
--- PASS: TestInsufficientStorage (11.24s)
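
Note on the RSRC_DOCKER_STORAGE event above: it carries its own remediation advice. As a rough sketch of that cleanup on the host (the prune flags are illustrative, and the "minikube ssh -- docker system prune" variant only applies when the cluster uses the Docker runtime rather than containerd):

	df -h /var                          # confirm /var really is at capacity
	docker system prune -a --volumes    # reclaim unused images, containers, networks and volumes
	out/minikube-linux-arm64 ssh -- docker system prune   # only if the cluster runtime is Docker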

                                                
                                    
x
+
TestRunningBinaryUpgrade (61.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.77596318 start -p running-upgrade-967055 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1002 21:50:47.173697 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:51:04.104305 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.77596318 start -p running-upgrade-967055 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.012747207s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-967055 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-967055 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.808897007s)
helpers_test.go:175: Cleaning up "running-upgrade-967055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-967055
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-967055: (2.022076598s)
--- PASS: TestRunningBinaryUpgrade (61.75s)

                                                
                                    
x
+
TestKubernetesUpgrade (362.86s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.257425554s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-032522
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-032522: (1.228239873s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-032522 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-032522 status --format={{.Host}}: exit status 7 (70.138753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m1.130014412s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-032522 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (212.698539ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-032522] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-032522
	    minikube start -p kubernetes-upgrade-032522 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0325222 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-032522 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-032522 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.561711464s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-032522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-032522
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-032522: (2.234780379s)
--- PASS: TestKubernetesUpgrade (362.86s)
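
The test confirms the upgrade with "kubectl version --output=json". A minimal way to pull just the server version out of that output (assumes jq is available on the host):

	kubectl --context kubernetes-upgrade-032522 version --output=json | jq -r .serverVersion.gitVersion
	# expected to print v1.34.1 after the upgrade above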

                                                
                                    
x
+
TestMissingContainerUpgrade (152.59s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2153927636 start -p missing-upgrade-278437 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2153927636 start -p missing-upgrade-278437 --memory=3072 --driver=docker  --container-runtime=containerd: (1m0.920836714s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-278437
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-278437
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-278437 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-278437 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m23.911975439s)
helpers_test.go:175: Cleaning up "missing-upgrade-278437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-278437
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-278437: (2.124492327s)
--- PASS: TestMissingContainerUpgrade (152.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318840 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-318840 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (91.504586ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-318840] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
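
The exit status 14 above is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the workflow the error text suggests when the version has been set as a global config value (key name taken from the suggestion above):

	out/minikube-linux-arm64 config get kubernetes-version || true   # see whether a global default is set
	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-318840 --no-kubernetes --driver=docker --container-runtime=containerd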

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.147740181s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318840 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (26.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.411329073s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318840 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-318840 status -o json: exit status 2 (554.160539ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-318840","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-318840
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-318840: (2.175078529s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.786490881s)
--- PASS: TestNoKubernetes/serial/Start (8.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318840 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318840 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.853872ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
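
"ssh: Process exited with status 3" is the success condition here: systemctl is-active exits 0 for an active unit and 3 for an inactive one, so a stopped kubelet surfaces as a non-zero exit from minikube ssh. A quick manual check of the same condition:

	out/minikube-linux-arm64 ssh -p NoKubernetes-318840 "sudo systemctl is-active kubelet"   # prints "inactive", exits 3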

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-318840
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-318840: (1.215335886s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318840 --driver=docker  --container-runtime=containerd
E1002 21:48:10.105469 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318840 --driver=docker  --container-runtime=containerd: (6.957269598s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318840 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318840 "sudo systemctl is-active --quiet service kubelet": exit status 1 (379.785427ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.53s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (67.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1523384606 start -p stopped-upgrade-964871 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1523384606 start -p stopped-upgrade-964871 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (37.471384096s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1523384606 -p stopped-upgrade-964871 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1523384606 -p stopped-upgrade-964871 stop: (1.338549051s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-964871 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-964871 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.171839671s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.99s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-964871
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-964871: (1.840625738s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

                                                
                                    
x
+
TestPause/serial/Start (83.25s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-117480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-117480 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.248517107s)
--- PASS: TestPause/serial/Start (83.25s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-117480 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-117480 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.145498124s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.17s)

                                                
                                    
x
+
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-117480 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-117480 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-117480 --output=json --layout=cluster: exit status 2 (315.940496ms)

                                                
                                                
-- stdout --
	{"Name":"pause-117480","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-117480","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
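
The cluster-layout status uses HTTP-style codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage), and the exit status 2 above is how the paused state is reported. A sketch for summarising that JSON (assumes jq is installed on the host):

	out/minikube-linux-arm64 status -p pause-117480 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, nodes: [.Nodes[] | {name: .Name, status: .StatusName}]}'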

                                                
                                    
x
+
TestPause/serial/Unpause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-117480 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.95s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-117480 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.75s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-117480 --alsologtostderr -v=5
E1002 21:53:10.105003 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-117480 --alsologtostderr -v=5: (2.751335551s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.5s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-117480
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-117480: exit status 1 (18.696037ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-117480: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)
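
The "no such volume" error above is the expected result: after delete, no container, volume, or network for the profile should remain. A rough manual equivalent of the checks this test performs (filter values taken from this log):

	docker ps -a --filter name=pause-117480                              # should list nothing
	docker volume inspect pause-117480                                   # should fail with "no such volume"
	docker network ls --filter name=pause-117480 --format '{{.Name}}'    # should print nothing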

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-960104 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-960104 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (278.123467ms)

                                                
                                                
-- stdout --
	* [false-960104] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:54:04.576790 2979634 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:54:04.576984 2979634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:54:04.577012 2979634 out.go:374] Setting ErrFile to fd 2...
	I1002 21:54:04.577032 2979634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:54:04.577334 2979634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-2783765/.minikube/bin
	I1002 21:54:04.577789 2979634 out.go:368] Setting JSON to false
	I1002 21:54:04.578811 2979634 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":63394,"bootTime":1759378651,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1002 21:54:04.578906 2979634 start.go:140] virtualization:  
	I1002 21:54:04.582758 2979634 out.go:179] * [false-960104] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:54:04.587099 2979634 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:54:04.587168 2979634 notify.go:220] Checking for updates...
	I1002 21:54:04.593729 2979634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:54:04.596725 2979634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-2783765/kubeconfig
	I1002 21:54:04.599725 2979634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-2783765/.minikube
	I1002 21:54:04.602752 2979634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:54:04.605863 2979634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:54:04.609392 2979634 config.go:182] Loaded profile config "kubernetes-upgrade-032522": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 21:54:04.609500 2979634 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:54:04.642596 2979634 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:54:04.642817 2979634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:54:04.762951 2979634 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:54:04.751862026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:54:04.763063 2979634 docker.go:318] overlay module found
	I1002 21:54:04.766553 2979634 out.go:179] * Using the docker driver based on user configuration
	I1002 21:54:04.768876 2979634 start.go:304] selected driver: docker
	I1002 21:54:04.768897 2979634 start.go:924] validating driver "docker" against <nil>
	I1002 21:54:04.768922 2979634 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:54:04.772931 2979634 out.go:203] 
	W1002 21:54:04.776011 2979634 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1002 21:54:04.778785 2979634 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-960104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-960104" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:53:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-032522
contexts:
- context:
    cluster: kubernetes-upgrade-032522
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:53:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-032522
  name: kubernetes-upgrade-032522
current-context: kubernetes-upgrade-032522
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-032522
  user:
    client-certificate: /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/kubernetes-upgrade-032522/client.crt
    client-key: /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/kubernetes-upgrade-032522/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-960104

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960104"

                                                
                                                
----------------------- debugLogs end: false-960104 [took: 4.907270071s] --------------------------------
helpers_test.go:175: Cleaning up "false-960104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-960104
--- PASS: TestNetworkPlugins/group/false (5.43s)
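
The exit status 14 above is the point of this subtest: with the containerd runtime, --cni=false is rejected because containerd needs a CNI plugin for pod networking. A start line that would pass that validation, shown only as an illustrative sketch rather than something this run executed ("bridge" is one of the built-in --cni values):

	out/minikube-linux-arm64 start -p false-960104 --memory=3072 --cni=bridge --driver=docker  --container-runtime=containerd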

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (63.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-439090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1002 21:56:04.105005 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:13.177008 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-439090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m3.728765545s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-439090 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [19e86fda-0ae1-4234-a488-081279413937] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [19e86fda-0ae1-4234-a488-081279413937] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003466395s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-439090 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.40s)
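
Outside the Go helpers, the same readiness wait can be approximated directly with kubectl (context, label and pod name are taken from the log; the timeout value is illustrative):

	kubectl --context old-k8s-version-439090 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-439090 exec busybox -- /bin/sh -c "ulimit -n"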

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-439090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-439090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075787851s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-439090 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)
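
Note: the addon step rewrites the metrics-server image and registry to stand-ins (echoserver on fake.domain), presumably so the test can verify that the --images/--registries overrides land in the Deployment spec without pulling a real metrics-server image; the describe call is what inspects the result:

    minikube addons enable metrics-server -p old-k8s-version-439090 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-439090 describe deploy/metrics-server -n kube-system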

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-439090 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-439090 --alsologtostderr -v=3: (11.974519856s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-439090 -n old-k8s-version-439090
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-439090 -n old-k8s-version-439090: exit status 7 (74.620789ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-439090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
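
Note: EnableAddonAfterStop first confirms the profile is down (status exits non-zero with Host reported as Stopped, which the harness explicitly tolerates), then enables the dashboard addon against the stopped profile so the setting can presumably take effect on the following SecondStart. A manual equivalent:

    # non-zero exit is expected while the profile is stopped
    minikube status --format='{{.Host}}' -p old-k8s-version-439090 -n old-k8s-version-439090
    echo "status exit code: $?"
    # addon settings persist in the profile, so enabling works while the cluster is down
    minikube addons enable dashboard -p old-k8s-version-439090 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4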

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (55.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-439090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-439090 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.004285598s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-439090 -n old-k8s-version-439090
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9kl64" [80f4f409-60c9-40eb-afa3-6b8e5e7be29c] Running
E1002 21:58:10.105922 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003295973s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9kl64" [80f4f409-60c9-40eb-afa3-6b8e5e7be29c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019505602s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-439090 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (75.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-479357 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-479357 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m15.68372203s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.68s)
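
Note: --preload=false disables minikube's preloaded image tarball, so the Kubernetes component images are pulled into containerd individually, which tends to make this FirstStart slower than the preloaded variants. A rough reproduction:

    # no preload tarball: kube component images are fetched individually
    minikube start -p no-preload-479357 --memory=3072 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.34.1 --preload=false --wait=true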

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-439090 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
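
Note: VerifyKubernetesImages lists the images loaded in the profile as JSON and reports anything it does not consider a stock minikube/Kubernetes image (here the kindnet and busybox images). A rough manual check, assuming the JSON output exposes repoTags and that jq is installed (this filter is illustrative, not the test's exact logic):

    minikube -p old-k8s-version-439090 image list --format=json
    minikube -p old-k8s-version-439090 image list --format=json \
      | jq -r '.[].repoTags[]' | grep -v '^registry.k8s.io/'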

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-439090 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-439090 -n old-k8s-version-439090
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-439090 -n old-k8s-version-439090: exit status 2 (325.465255ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-439090 -n old-k8s-version-439090
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-439090 -n old-k8s-version-439090: exit status 2 (362.612158ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-439090 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-439090 -n old-k8s-version-439090
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-439090 -n old-k8s-version-439090
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.65s)
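
Note: the Pause step checks both status fields around the transition: after pause the API server is reported as Paused and the kubelet as Stopped (each with exit status 2, which the harness accepts), and unpause brings both back without error. Manually:

    minikube pause -p old-k8s-version-439090
    minikube status --format='{{.APIServer}}' -p old-k8s-version-439090   # Paused, non-zero exit
    minikube status --format='{{.Kubelet}}' -p old-k8s-version-439090     # Stopped, non-zero exit
    minikube unpause -p old-k8s-version-439090
    minikube status --format='{{.APIServer}}' -p old-k8s-version-439090   # exits 0 once running again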

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (91.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-031787 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-031787 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m31.376274883s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.38s)
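
Note: --embed-certs makes minikube write the client certificate and key data directly into kubeconfig instead of referencing files under the .minikube directory, so the generated kubeconfig is portable on its own. The flags otherwise mirror the other FirstStart runs:

    minikube start -p embed-certs-031787 --memory=3072 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.34.1 --embed-certs --wait=true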

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-479357 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d243fde4-4f51-4bf0-b7c8-e8cd0d260d37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d243fde4-4f51-4bf0-b7c8-e8cd0d260d37] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004108995s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-479357 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-479357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-479357 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-479357 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-479357 --alsologtostderr -v=3: (12.018711011s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-479357 -n no-preload-479357
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-479357 -n no-preload-479357: exit status 7 (78.784763ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-479357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (53.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-479357 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-479357 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.777475661s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-479357 -n no-preload-479357
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-031787 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3d427fac-f1c0-4b5b-bfcc-c57bdc070ff5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3d427fac-f1c0-4b5b-bfcc-c57bdc070ff5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003826834s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-031787 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-031787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-031787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.670148973s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-031787 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-031787 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-031787 --alsologtostderr -v=3: (12.835757416s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-031787 -n embed-certs-031787
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-031787 -n embed-certs-031787: exit status 7 (84.733306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-031787 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (49.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-031787 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-031787 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.732912937s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-031787 -n embed-certs-031787
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9dpt" [f565d14a-17e0-47b6-8793-4658965d77b8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004137728s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9dpt" [f565d14a-17e0-47b6-8793-4658965d77b8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003499347s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-479357 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-479357 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-479357 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-479357 -n no-preload-479357
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-479357 -n no-preload-479357: exit status 2 (335.844915ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-479357 -n no-preload-479357
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-479357 -n no-preload-479357: exit status 2 (360.308307ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-479357 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-479357 -n no-preload-479357
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-479357 -n no-preload-479357
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-250730 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 22:01:04.105231 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/addons-774992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-250730 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m27.109121981s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.11s)
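
Note: default-k8s-diff-port runs the same flow with the API server moved off the default 8443 via --apiserver-port=8444, so the kubeconfig entry and all later status checks go through the alternate port:

    minikube start -p default-k8s-diff-port-250730 --memory=3072 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.34.1 --apiserver-port=8444 --wait=true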

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6f6zd" [5f558fe1-0329-40d5-b0a2-4c4bca771677] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003896015s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6f6zd" [5f558fe1-0329-40d5-b0a2-4c4bca771677] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004053536s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-031787 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-031787 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-031787 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-031787 -n embed-certs-031787
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-031787 -n embed-certs-031787: exit status 2 (397.075697ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-031787 -n embed-certs-031787
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-031787 -n embed-certs-031787: exit status 2 (415.921218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-031787 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-031787 -n embed-certs-031787
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-031787 -n embed-certs-031787
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.92s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (42.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-222964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 22:01:46.918345 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:46.924614 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:46.935877 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:46.957795 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:46.999086 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:47.080405 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:47.241837 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:47.563836 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:48.205186 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:49.487040 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:52.048328 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:57.170240 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:02:07.411773 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-222964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (42.702036751s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.70s)
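
Note: newest-cni starts with a bare CNI network plugin and a custom pod CIDR passed through kubeadm, and deliberately waits only for the API server, system pods and default service account, since without further CNI setup ordinary pods may not schedule (the WARNING lines in the later steps call this out):

    minikube start -p newest-cni-222964 --memory=3072 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.34.1 --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --wait=apiserver,system_pods,default_sa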

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-222964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-222964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.010688902s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-222964 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-222964 --alsologtostderr -v=3: (1.273464593s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-222964 -n newest-cni-222964
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-222964 -n newest-cni-222964: exit status 7 (76.314811ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-222964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (18.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-222964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 22:02:27.893763 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-222964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (18.152281485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-222964 -n newest-cni-222964
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-250730 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9be55f48-56a7-4eaf-9399-e666e39ca84d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9be55f48-56a7-4eaf-9399-e666e39ca84d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003800924s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-250730 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-222964 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-222964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-222964 -n newest-cni-222964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-222964 -n newest-cni-222964: exit status 2 (385.818585ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-222964 -n newest-cni-222964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-222964 -n newest-cni-222964: exit status 2 (357.049084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-222964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-222964 -n newest-cni-222964
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-222964 -n newest-cni-222964
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (86.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.751485394s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-250730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-250730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.57150473s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-250730 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.71s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-250730 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-250730 --alsologtostderr -v=3: (12.355313442s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730: exit status 7 (101.57932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-250730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-250730 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 22:03:08.855437 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:03:10.105296 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/functional-029371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-250730 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.297714723s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jw95j" [0ce54518-ddb7-4d09-8f73-c32d2d8bd327] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002988075s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jw95j" [0ce54518-ddb7-4d09-8f73-c32d2d8bd327] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00405481s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-250730 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-250730 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-250730 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730: exit status 2 (350.422687ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730: exit status 2 (352.765268ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-250730 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-250730 -n default-k8s-diff-port-250730
I1002 22:04:05.030815 2785630 config.go:182] Loaded profile config "auto-960104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.66s)
E1002 22:09:25.996299 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/auto-960104/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:09:29.183423 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/no-preload-479357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-960104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-960104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jm8t9" [26e854e4-2605-4de8-973b-5b0e19061c7e] Pending
helpers_test.go:352: "netcat-cd4db9dbf-jm8t9" [26e854e4-2605-4de8-973b-5b0e19061c7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003473545s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.52s)
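Each NetCatPod subtest applies the same netcat manifest and then polls for a Ready pod. Outside the harness, a rough equivalent looks like this (the app=netcat selector comes from the log; kubectl wait is an assumed substitute for the harness's own polling loop):

	kubectl --context auto-960104 replace --force -f testdata/netcat-deployment.yaml
	# Wait for the pod the Deployment creates to report Ready, mirroring the 15m test timeout.
	kubectl --context auto-960104 wait --for=condition=Ready pod -l app=netcat --timeout=15m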

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (59.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (59.73087935s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.73s)
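The Start subtests for the network plugins differ only in the --cni argument. A representative invocation, copied from the log (later groups swap in calico, flannel, bridge, --enable-default-cni, or a custom manifest path):

	out/minikube-linux-arm64 start -p kindnet-960104 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=containerd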

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-960104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)
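The DNS subtest resolves the cluster's kubernetes.default service from inside the netcat pod, which exercises CoreDNS over the plugin's pod network:

	kubectl --context auto-960104 exec deployment/netcat -- nslookup kubernetes.default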

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
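Localhost and HairPin run the same nc probe against different targets: Localhost dials 127.0.0.1 inside the pod, while HairPin dials the pod's own Service name ("netcat"), forcing traffic out to the service IP and back into the same pod (hairpin NAT). Both commands are taken from the log:

	kubectl --context auto-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"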

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (56.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1002 22:04:49.688478 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/no-preload-479357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (56.322361454s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-r4qtn" [75954539-7792-4628-89b5-6f483b74ce89] Running
E1002 22:05:10.170486 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/no-preload-479357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005755038s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
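ControllerPod subtests wait for the CNI's own DaemonSet pod to become healthy before any connectivity checks run. A comparable manual check, with the label selector taken from the log (calico uses k8s-app=calico-node instead), might look like:

	# kindnet ships as a DaemonSet in kube-system labelled app=kindnet.
	kubectl --context kindnet-960104 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-960104 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m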

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-960104 "pgrep -a kubelet"
I1002 22:05:15.229935 2785630 config.go:182] Loaded profile config "kindnet-960104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-960104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vg48b" [4b8c296e-612b-4312-b2f1-4499849ec4a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vg48b" [4b8c296e-612b-4312-b2f1-4499849ec4a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00603572s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-960104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fz89m" [39979720-1eb4-4a04-a760-f22062617d88] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005085366s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-960104 "pgrep -a kubelet"
I1002 22:05:44.913441 2785630 config.go:182] Loaded profile config "calico-960104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-960104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b58cw" [478a199d-f6cd-4923-ad0c-cc306c481fc8] Pending
helpers_test.go:352: "netcat-cd4db9dbf-b58cw" [478a199d-f6cd-4923-ad0c-cc306c481fc8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b58cw" [478a199d-f6cd-4923-ad0c-cc306c481fc8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.007242853s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (70.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m10.875191367s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.88s)
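custom-flannel shows that --cni also accepts a path to an arbitrary CNI manifest rather than a built-in plugin name; the invocation below is copied from the log:

	out/minikube-linux-arm64 start -p custom-flannel-960104 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd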

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-960104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (72.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1002 22:06:46.918247 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/old-k8s-version-439090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m12.703625042s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-960104 "pgrep -a kubelet"
I1002 22:07:03.274024 2785630 config.go:182] Loaded profile config "custom-flannel-960104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-960104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xqn74" [676645b5-ea52-474e-9618-40e61eb8f394] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xqn74" [676645b5-ea52-474e-9618-40e61eb8f394] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00415675s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-960104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1002 22:07:13.054885 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/no-preload-479357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (68.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1002 22:07:35.532202 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/default-k8s-diff-port-250730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m8.051049604s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-960104 "pgrep -a kubelet"
I1002 22:07:37.702566 2785630 config.go:182] Loaded profile config "enable-default-cni-960104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-960104 replace --force -f testdata/netcat-deployment.yaml
I1002 22:07:38.013016 2785630 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tplkn" [36ec2a4a-f01c-416f-93f5-59b18ab5916b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:07:40.653469 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/default-k8s-diff-port-250730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tplkn" [36ec2a4a-f01c-416f-93f5-59b18ab5916b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006286875s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-960104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (51.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-960104 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (51.134188897s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-lvkv7" [7264a131-6684-4265-b786-76eab012a166] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003515162s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-960104 "pgrep -a kubelet"
I1002 22:08:48.956476 2785630 config.go:182] Loaded profile config "flannel-960104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-960104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h2ktj" [fbfe946b-d2dc-40f3-8172-706830bb70f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:08:52.337746 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/default-k8s-diff-port-250730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-h2ktj" [fbfe946b-d2dc-40f3-8172-706830bb70f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003470103s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-960104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-960104 "pgrep -a kubelet"
E1002 22:09:06.145646 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/auto-960104/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1002 22:09:06.345787 2785630 config.go:182] Loaded profile config "bridge-960104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-960104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mwqmt" [56962587-83f1-45d3-bc0c-3aad15c937ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:09:06.787905 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/auto-960104/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:09:08.069941 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/auto-960104/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:09:10.631860 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/auto-960104/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mwqmt" [56962587-83f1-45d3-bc0c-3aad15c937ff] Running
E1002 22:09:15.754152 2785630 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/auto-960104/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003227384s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-960104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-960104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                    

Test skip (30/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-554933 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-554933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-554933
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-591243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-591243
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-960104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-960104" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:53:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-032522
contexts:
- context:
    cluster: kubernetes-upgrade-032522
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:53:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-032522
  name: kubernetes-upgrade-032522
current-context: kubernetes-upgrade-032522
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-032522
  user:
    client-certificate: /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/kubernetes-upgrade-032522/client.crt
    client-key: /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/kubernetes-upgrade-032522/client.key
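
The kubeconfig above contains only the kubernetes-upgrade-032522 cluster, context and user, which is why every kubectl call against kubenet-960104 in this dump fails with "context ... does not exist" and every minikube call reports the profile as not found. A minimal sketch of how this could be confirmed by hand (standard kubectl/minikube commands, not part of the captured log):

# List the contexts actually present in the kubeconfig; kubenet-960104 should be absent.
kubectl config get-contexts

# Show the active context (kubernetes-upgrade-032522 in the dump above).
kubectl config current-context

# Pinning a command to the missing context reproduces the error collected above.
kubectl --context kubenet-960104 get nodes

# The matching minikube profile is also gone, which explains the "Profile not found" lines.
out/minikube-linux-arm64 profile list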

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-960104

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960104"

                                                
                                                
----------------------- debugLogs end: kubenet-960104 [took: 5.725862023s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-960104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-960104
--- SKIP: TestNetworkPlugins/group/kubenet (5.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-960104 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-960104
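
The netcat probes above exercise in-cluster DNS: name resolution of kubernetes.default plus direct udp/53 and tcp/53 queries against the service IP 10.96.0.10. They fail here only because the cilium-960104 context never existed. A minimal sketch of the equivalent manual checks against a running profile, assuming the test's netcat deployment is present and its image ships nslookup, dig and nc (the exact manifest is not shown in this log):

# Resolve the API service name through cluster DNS from inside the netcat pod.
kubectl --context cilium-960104 exec deploy/netcat -- nslookup kubernetes.default

# Query the DNS service IP directly over UDP and then over TCP port 53.
kubectl --context cilium-960104 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
kubectl --context cilium-960104 exec deploy/netcat -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local

# Check raw reachability of the DNS service on both protocols.
kubectl --context cilium-960104 exec deploy/netcat -- nc -u -z -w 2 10.96.0.10 53
kubectl --context cilium-960104 exec deploy/netcat -- nc -z -w 2 10.96.0.10 53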

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-960104" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21682-2783765/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:53:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-032522
contexts:
- context:
    cluster: kubernetes-upgrade-032522
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:53:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-032522
  name: kubernetes-upgrade-032522
current-context: kubernetes-upgrade-032522
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-032522
  user:
    client-certificate: /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/kubernetes-upgrade-032522/client.crt
    client-key: /home/jenkins/minikube-integration/21682-2783765/.minikube/profiles/kubernetes-upgrade-032522/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-960104

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-960104" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960104"

                                                
                                                
----------------------- debugLogs end: cilium-960104 [took: 5.764524289s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-960104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-960104
--- SKIP: TestNetworkPlugins/group/cilium (5.99s)

                                                
                                    