Test Report: Docker_Linux_containerd 21652

b9467c4b05d043dd40c691e5c40c4e59f96d3adc:2025-09-29:41683

Test fail (20/325)

TestDockerEnvContainerd (43.33s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-230733 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-230733 --driver=docker  --container-runtime=containerd: (24.441372453s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-230733"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-230733": (1.001438494s)
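The environment values used in the commands below are captured from the docker-env --ssh-host --ssh-add call above. As a minimal sketch of how that output is normally consumed outside the test harness (the eval pattern is standard minikube usage and an assumption here, not part of this run):

# sketch, not from the test log: point the local docker CLI at the daemon
# inside the dockerenv-230733 node over SSH, then verify the connection
eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-230733)"
docker version    # with the env applied, this talks to ssh://docker@127.0.0.1:<host port>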
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXxKMnhD/agent.1126955" SSH_AGENT_PID="1126956" DOCKER_HOST=ssh://docker@127.0.0.1:33266 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXxKMnhD/agent.1126955" SSH_AGENT_PID="1126956" DOCKER_HOST=ssh://docker@127.0.0.1:33266 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXxKMnhD/agent.1126955" SSH_AGENT_PID="1126956" DOCKER_HOST=ssh://docker@127.0.0.1:33266 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (2.660878401s)

-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXxKMnhD/agent.1126955" SSH_AGENT_PID="1126956" DOCKER_HOST=ssh://docker@127.0.0.1:33266 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
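The build failed under the legacy builder (DOCKER_BUILDKIT=0) with a bare "Error response from daemon: exit status 1" from the daemon reached over SSH, so the image never appeared in docker image ls. A manual-reproduction sketch, reusing the exact values from the log but dropping DOCKER_BUILDKIT=0 as the deprecation notice suggests (whether BuildKit changes the outcome is an assumption to test, not a claim):

# sketch, not from the test log: re-run the failing build with BuildKit enabled
SSH_AUTH_SOCK="/tmp/ssh-XXXXXXxKMnhD/agent.1126955" \
SSH_AGENT_PID="1126956" \
DOCKER_HOST=ssh://docker@127.0.0.1:33266 \
docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
docker image ls | grep minikube-dockerenv-containerd-test   # the check performed at docker_test.go:255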
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-09-29 12:23:15.725398051 +0000 UTC m=+360.309421233
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-230733
helpers_test.go:243: (dbg) docker inspect dockerenv-230733:

-- stdout --
	[
	    {
	        "Id": "fdbed458c3205a89816fc5019259340c9514ee015fe5d4b5ac47ed88d80e96e1",
	        "Created": "2025-09-29T12:22:41.889254107Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1124164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:22:41.918526474Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/fdbed458c3205a89816fc5019259340c9514ee015fe5d4b5ac47ed88d80e96e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fdbed458c3205a89816fc5019259340c9514ee015fe5d4b5ac47ed88d80e96e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/fdbed458c3205a89816fc5019259340c9514ee015fe5d4b5ac47ed88d80e96e1/hosts",
	        "LogPath": "/var/lib/docker/containers/fdbed458c3205a89816fc5019259340c9514ee015fe5d4b5ac47ed88d80e96e1/fdbed458c3205a89816fc5019259340c9514ee015fe5d4b5ac47ed88d80e96e1-json.log",
	        "Name": "/dockerenv-230733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-230733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-230733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fdbed458c3205a89816fc5019259340c9514ee015fe5d4b5ac47ed88d80e96e1",
	                "LowerDir": "/var/lib/docker/overlay2/469fcfe166795b34f0d839d5502c40a9e6df8dd54c063d915a5f2db53c57d85d-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469fcfe166795b34f0d839d5502c40a9e6df8dd54c063d915a5f2db53c57d85d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469fcfe166795b34f0d839d5502c40a9e6df8dd54c063d915a5f2db53c57d85d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469fcfe166795b34f0d839d5502c40a9e6df8dd54c063d915a5f2db53c57d85d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-230733",
	                "Source": "/var/lib/docker/volumes/dockerenv-230733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-230733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-230733",
	                "name.minikube.sigs.k8s.io": "dockerenv-230733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95b3751e2cb3350484956697dc1e843d8d24259005add54c44d83f9e43d1180a",
	            "SandboxKey": "/var/run/docker/netns/95b3751e2cb3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33266"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33267"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33270"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33268"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33269"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-230733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:73:3c:bc:bb:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3082c50a22240c6b19257a968d8b2dd3c5ccf5e3f6ae3bcc81951394c9958e8c",
	                    "EndpointID": "532da024af43db8232fa84317d1dd6ac1732bc485b25ce672de22a577cfe3549",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-230733",
	                        "fdbed458c320"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-230733 -n dockerenv-230733
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-230733 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p dockerenv-230733 logs -n 25: (1.13434514s)
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                       ARGS                                                        │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-752861 addons disable nvidia-device-plugin --alsologtostderr -v=1                                          │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable cloud-spanner --alsologtostderr -v=1                                                 │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable headlamp --alsologtostderr -v=1                                                      │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ ssh        │ addons-752861 ssh cat /opt/local-path-provisioner/pvc-ffea6f18-99c9-41ff-a6ec-8000ba91e3a7_default_test-pvc/file1 │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                   │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ ip         │ addons-752861 ip                                                                                                  │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable registry --alsologtostderr -v=1                                                      │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-752861                                    │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable registry-creds --alsologtostderr -v=1                                                │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable inspektor-gadget --alsologtostderr -v=1                                              │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                         │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ ssh        │ addons-752861 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                          │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ ip         │ addons-752861 ip                                                                                                  │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable ingress-dns --alsologtostderr -v=1                                                   │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable yakd --alsologtostderr -v=1                                                          │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable ingress --alsologtostderr -v=1                                                       │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:21 UTC │ 29 Sep 25 12:21 UTC │
	│ addons     │ addons-752861 addons disable volumesnapshots --alsologtostderr -v=1                                               │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:22 UTC │
	│ addons     │ addons-752861 addons disable csi-hostpath-driver --alsologtostderr -v=1                                           │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:22 UTC │
	│ stop       │ -p addons-752861                                                                                                  │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:22 UTC │
	│ addons     │ enable dashboard -p addons-752861                                                                                 │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:22 UTC │
	│ addons     │ disable dashboard -p addons-752861                                                                                │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:22 UTC │
	│ addons     │ disable gvisor -p addons-752861                                                                                   │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:22 UTC │
	│ delete     │ -p addons-752861                                                                                                  │ addons-752861    │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:22 UTC │
	│ start      │ -p dockerenv-230733 --driver=docker  --container-runtime=containerd                                               │ dockerenv-230733 │ jenkins │ v1.37.0 │ 29 Sep 25 12:22 UTC │ 29 Sep 25 12:23 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-230733                                                                          │ dockerenv-230733 │ jenkins │ v1.37.0 │ 29 Sep 25 12:23 UTC │ 29 Sep 25 12:23 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:22:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:22:36.693841 1123601 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:22:36.693930 1123601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:22:36.693933 1123601 out.go:374] Setting ErrFile to fd 2...
	I0929 12:22:36.693936 1123601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:22:36.694138 1123601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:22:36.694640 1123601 out.go:368] Setting JSON to false
	I0929 12:22:36.695618 1123601 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18294,"bootTime":1759130263,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:22:36.695702 1123601 start.go:140] virtualization: kvm guest
	I0929 12:22:36.697696 1123601 out.go:179] * [dockerenv-230733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:22:36.698740 1123601 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:22:36.698788 1123601 notify.go:220] Checking for updates...
	I0929 12:22:36.700874 1123601 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:22:36.701924 1123601 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:22:36.702906 1123601 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:22:36.704018 1123601 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:22:36.705089 1123601 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:22:36.706309 1123601 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:22:36.731173 1123601 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:22:36.731267 1123601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:22:36.787368 1123601 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-29 12:22:36.777198299 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:22:36.787483 1123601 docker.go:318] overlay module found
	I0929 12:22:36.789442 1123601 out.go:179] * Using the docker driver based on user configuration
	I0929 12:22:36.790403 1123601 start.go:304] selected driver: docker
	I0929 12:22:36.790412 1123601 start.go:924] validating driver "docker" against <nil>
	I0929 12:22:36.790424 1123601 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:22:36.790554 1123601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:22:36.845625 1123601 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-29 12:22:36.835238116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:22:36.845779 1123601 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:22:36.846351 1123601 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 12:22:36.846505 1123601 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 12:22:36.847882 1123601 out.go:179] * Using Docker driver with root privileges
	I0929 12:22:36.848718 1123601 cni.go:84] Creating CNI manager for ""
	I0929 12:22:36.848771 1123601 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:22:36.848780 1123601 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 12:22:36.848845 1123601 start.go:348] cluster config:
	{Name:dockerenv-230733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-230733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:22:36.849864 1123601 out.go:179] * Starting "dockerenv-230733" primary control-plane node in "dockerenv-230733" cluster
	I0929 12:22:36.850641 1123601 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 12:22:36.851490 1123601 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:22:36.852426 1123601 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 12:22:36.852462 1123601 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 12:22:36.852470 1123601 cache.go:58] Caching tarball of preloaded images
	I0929 12:22:36.852560 1123601 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:22:36.852580 1123601 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 12:22:36.852586 1123601 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 12:22:36.852918 1123601 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/config.json ...
	I0929 12:22:36.852935 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/config.json: {Name:mk26740fc2ea6e22a5ebeefb57f0d750315dc0d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:36.873049 1123601 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:22:36.873061 1123601 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:22:36.873077 1123601 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:22:36.873102 1123601 start.go:360] acquireMachinesLock for dockerenv-230733: {Name:mk0cf6cef2a3d724166af51253a11990193ed097 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:22:36.873200 1123601 start.go:364] duration metric: took 83.553µs to acquireMachinesLock for "dockerenv-230733"
	I0929 12:22:36.873219 1123601 start.go:93] Provisioning new machine with config: &{Name:dockerenv-230733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-230733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPU
s: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 12:22:36.873274 1123601 start.go:125] createHost starting for "" (driver="docker")
	I0929 12:22:36.874986 1123601 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0929 12:22:36.875180 1123601 start.go:159] libmachine.API.Create for "dockerenv-230733" (driver="docker")
	I0929 12:22:36.875205 1123601 client.go:168] LocalClient.Create starting
	I0929 12:22:36.875295 1123601 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem
	I0929 12:22:36.875322 1123601 main.go:141] libmachine: Decoding PEM data...
	I0929 12:22:36.875336 1123601 main.go:141] libmachine: Parsing certificate...
	I0929 12:22:36.875392 1123601 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem
	I0929 12:22:36.875406 1123601 main.go:141] libmachine: Decoding PEM data...
	I0929 12:22:36.875412 1123601 main.go:141] libmachine: Parsing certificate...
	I0929 12:22:36.875704 1123601 cli_runner.go:164] Run: docker network inspect dockerenv-230733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 12:22:36.892386 1123601 cli_runner.go:211] docker network inspect dockerenv-230733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 12:22:36.892453 1123601 network_create.go:284] running [docker network inspect dockerenv-230733] to gather additional debugging logs...
	I0929 12:22:36.892470 1123601 cli_runner.go:164] Run: docker network inspect dockerenv-230733
	W0929 12:22:36.909686 1123601 cli_runner.go:211] docker network inspect dockerenv-230733 returned with exit code 1
	I0929 12:22:36.909709 1123601 network_create.go:287] error running [docker network inspect dockerenv-230733]: docker network inspect dockerenv-230733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-230733 not found
	I0929 12:22:36.909723 1123601 network_create.go:289] output of [docker network inspect dockerenv-230733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-230733 not found
	
	** /stderr **
	I0929 12:22:36.909812 1123601 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:22:36.926550 1123601 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a10f70}
	I0929 12:22:36.926588 1123601 network_create.go:124] attempt to create docker network dockerenv-230733 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 12:22:36.926636 1123601 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-230733 dockerenv-230733
	I0929 12:22:36.980722 1123601 network_create.go:108] docker network dockerenv-230733 192.168.49.0/24 created
	I0929 12:22:36.980746 1123601 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-230733" container
	I0929 12:22:36.980822 1123601 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 12:22:36.997181 1123601 cli_runner.go:164] Run: docker volume create dockerenv-230733 --label name.minikube.sigs.k8s.io=dockerenv-230733 --label created_by.minikube.sigs.k8s.io=true
	I0929 12:22:37.014710 1123601 oci.go:103] Successfully created a docker volume dockerenv-230733
	I0929 12:22:37.014794 1123601 cli_runner.go:164] Run: docker run --rm --name dockerenv-230733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-230733 --entrypoint /usr/bin/test -v dockerenv-230733:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 12:22:37.409588 1123601 oci.go:107] Successfully prepared a docker volume dockerenv-230733
	I0929 12:22:37.409647 1123601 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 12:22:37.409669 1123601 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 12:22:37.409753 1123601 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-230733:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 12:22:41.817558 1123601 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-230733:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.40775117s)
	I0929 12:22:41.817584 1123601 kic.go:203] duration metric: took 4.407909917s to extract preloaded images to volume ...
	W0929 12:22:41.817687 1123601 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 12:22:41.817719 1123601 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 12:22:41.817771 1123601 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 12:22:41.872927 1123601 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-230733 --name dockerenv-230733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-230733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-230733 --network dockerenv-230733 --ip 192.168.49.2 --volume dockerenv-230733:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 12:22:42.119639 1123601 cli_runner.go:164] Run: docker container inspect dockerenv-230733 --format={{.State.Running}}
	I0929 12:22:42.138183 1123601 cli_runner.go:164] Run: docker container inspect dockerenv-230733 --format={{.State.Status}}
	I0929 12:22:42.156750 1123601 cli_runner.go:164] Run: docker exec dockerenv-230733 stat /var/lib/dpkg/alternatives/iptables
	I0929 12:22:42.206322 1123601 oci.go:144] the created container "dockerenv-230733" has a running status.
	I0929 12:22:42.206344 1123601 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa...
	I0929 12:22:42.488450 1123601 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 12:22:42.512207 1123601 cli_runner.go:164] Run: docker container inspect dockerenv-230733 --format={{.State.Status}}
	I0929 12:22:42.529211 1123601 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 12:22:42.529223 1123601 kic_runner.go:114] Args: [docker exec --privileged dockerenv-230733 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 12:22:42.576419 1123601 cli_runner.go:164] Run: docker container inspect dockerenv-230733 --format={{.State.Status}}
	I0929 12:22:42.593202 1123601 machine.go:93] provisionDockerMachine start ...
	I0929 12:22:42.593302 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:42.609771 1123601 main.go:141] libmachine: Using SSH client type: native
	I0929 12:22:42.610068 1123601 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33266 <nil> <nil>}
	I0929 12:22:42.610077 1123601 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:22:42.610709 1123601 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37986->127.0.0.1:33266: read: connection reset by peer
	I0929 12:22:45.750205 1123601 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-230733
	
	I0929 12:22:45.750242 1123601 ubuntu.go:182] provisioning hostname "dockerenv-230733"
	I0929 12:22:45.750319 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:45.769159 1123601 main.go:141] libmachine: Using SSH client type: native
	I0929 12:22:45.769448 1123601 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33266 <nil> <nil>}
	I0929 12:22:45.769459 1123601 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-230733 && echo "dockerenv-230733" | sudo tee /etc/hostname
	I0929 12:22:45.918298 1123601 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-230733
	
	I0929 12:22:45.918388 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:45.935860 1123601 main.go:141] libmachine: Using SSH client type: native
	I0929 12:22:45.936105 1123601 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33266 <nil> <nil>}
	I0929 12:22:45.936116 1123601 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-230733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-230733/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-230733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:22:46.071386 1123601 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:22:46.071412 1123601 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 12:22:46.071473 1123601 ubuntu.go:190] setting up certificates
	I0929 12:22:46.071490 1123601 provision.go:84] configureAuth start
	I0929 12:22:46.071565 1123601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-230733
	I0929 12:22:46.089364 1123601 provision.go:143] copyHostCerts
	I0929 12:22:46.089419 1123601 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 12:22:46.089429 1123601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 12:22:46.089514 1123601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 12:22:46.089653 1123601 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 12:22:46.089660 1123601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 12:22:46.089697 1123601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 12:22:46.089845 1123601 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 12:22:46.089853 1123601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 12:22:46.089887 1123601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 12:22:46.090015 1123601 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.dockerenv-230733 san=[127.0.0.1 192.168.49.2 dockerenv-230733 localhost minikube]
	I0929 12:22:46.325627 1123601 provision.go:177] copyRemoteCerts
	I0929 12:22:46.325687 1123601 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:22:46.325728 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:46.343797 1123601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33266 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa Username:docker}
	I0929 12:22:46.442220 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 12:22:46.469896 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0929 12:22:46.495552 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 12:22:46.521669 1123601 provision.go:87] duration metric: took 450.142859ms to configureAuth
	I0929 12:22:46.521695 1123601 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:22:46.521862 1123601 config.go:182] Loaded profile config "dockerenv-230733": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:22:46.521873 1123601 machine.go:96] duration metric: took 3.928659417s to provisionDockerMachine
	I0929 12:22:46.521879 1123601 client.go:171] duration metric: took 9.646669723s to LocalClient.Create
	I0929 12:22:46.521900 1123601 start.go:167] duration metric: took 9.646720979s to libmachine.API.Create "dockerenv-230733"
	I0929 12:22:46.521906 1123601 start.go:293] postStartSetup for "dockerenv-230733" (driver="docker")
	I0929 12:22:46.521913 1123601 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:22:46.521972 1123601 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:22:46.522006 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:46.539560 1123601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33266 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa Username:docker}
	I0929 12:22:46.640278 1123601 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:22:46.644135 1123601 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:22:46.644163 1123601 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:22:46.644174 1123601 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:22:46.644183 1123601 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:22:46.644194 1123601 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 12:22:46.644251 1123601 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 12:22:46.644267 1123601 start.go:296] duration metric: took 122.356685ms for postStartSetup
	I0929 12:22:46.644569 1123601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-230733
	I0929 12:22:46.662334 1123601 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/config.json ...
	I0929 12:22:46.662631 1123601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:22:46.662673 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:46.680490 1123601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33266 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa Username:docker}
	I0929 12:22:46.775318 1123601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:22:46.780067 1123601 start.go:128] duration metric: took 9.906775794s to createHost
	I0929 12:22:46.780083 1123601 start.go:83] releasing machines lock for "dockerenv-230733", held for 9.906877166s
	I0929 12:22:46.780175 1123601 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-230733
	I0929 12:22:46.797552 1123601 ssh_runner.go:195] Run: cat /version.json
	I0929 12:22:46.797590 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:46.797618 1123601 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:22:46.797688 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:22:46.815656 1123601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33266 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa Username:docker}
	I0929 12:22:46.816364 1123601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33266 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa Username:docker}
	I0929 12:22:46.907335 1123601 ssh_runner.go:195] Run: systemctl --version
	I0929 12:22:46.979801 1123601 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:22:46.985013 1123601 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:22:47.015388 1123601 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:22:47.015454 1123601 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:22:47.044586 1123601 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
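As an illustrative aside (not part of the recorded run), the bridge/podman configs parked by the find/mv step above can be listed afterwards to confirm exactly what was renamed:

	# list CNI configs that minikube disabled with the .mk_disabled suffix
	sudo ls -la /etc/cni/net.d/*.mk_disabled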
	I0929 12:22:47.044604 1123601 start.go:495] detecting cgroup driver to use...
	I0929 12:22:47.044638 1123601 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:22:47.044681 1123601 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 12:22:47.058689 1123601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:22:47.071130 1123601 docker.go:218] disabling cri-docker service (if available) ...
	I0929 12:22:47.071200 1123601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 12:22:47.085862 1123601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 12:22:47.100249 1123601 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 12:22:47.168650 1123601 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 12:22:47.236797 1123601 docker.go:234] disabling docker service ...
	I0929 12:22:47.236852 1123601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 12:22:47.255742 1123601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 12:22:47.267676 1123601 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 12:22:47.340023 1123601 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 12:22:47.406837 1123601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:22:47.419147 1123601 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:22:47.438330 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:22:47.450716 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:22:47.461723 1123601 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:22:47.461774 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:22:47.472876 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:22:47.483692 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:22:47.494681 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:22:47.505632 1123601 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:22:47.515629 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:22:47.526428 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:22:47.536799 1123601 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
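The series of sed edits above all target /etc/containerd/config.toml; a quick, illustrative way to confirm the patched values before the restart (assuming the stock containerd 1.7 config layout):

	# show the keys minikube just rewrote: sandbox image, cgroup driver, CNI conf dir, unprivileged ports
	sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml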
	I0929 12:22:47.547530 1123601 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:22:47.556751 1123601 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:22:47.566383 1123601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:22:47.633613 1123601 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:22:47.738193 1123601 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 12:22:47.738275 1123601 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 12:22:47.742471 1123601 start.go:563] Will wait 60s for crictl version
	I0929 12:22:47.742517 1123601 ssh_runner.go:195] Run: which crictl
	I0929 12:22:47.746007 1123601 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:22:47.780555 1123601 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 12:22:47.780614 1123601 ssh_runner.go:195] Run: containerd --version
	I0929 12:22:47.804314 1123601 ssh_runner.go:195] Run: containerd --version
	I0929 12:22:47.829387 1123601 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 12:22:47.830526 1123601 cli_runner.go:164] Run: docker network inspect dockerenv-230733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:22:47.847165 1123601 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 12:22:47.851612 1123601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:22:47.863679 1123601 kubeadm.go:875] updating cluster {Name:dockerenv-230733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-230733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:22:47.863779 1123601 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 12:22:47.863821 1123601 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:22:47.898408 1123601 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 12:22:47.898421 1123601 containerd.go:534] Images already preloaded, skipping extraction
	I0929 12:22:47.898476 1123601 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:22:47.932883 1123601 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 12:22:47.932896 1123601 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:22:47.932902 1123601 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 containerd true true} ...
	I0929 12:22:47.933037 1123601 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-230733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:dockerenv-230733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
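The kubelet unit drop-in above is written later in the run to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp at 12:22:47.987258). Once it is in place, the effective unit, including the ExecStart override above, can be inspected with:

	# print the kubelet unit plus all drop-ins that systemd will apply
	sudo systemctl cat kubelet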
	I0929 12:22:47.933095 1123601 ssh_runner.go:195] Run: sudo crictl info
	I0929 12:22:47.967936 1123601 cni.go:84] Creating CNI manager for ""
	I0929 12:22:47.967949 1123601 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:22:47.967975 1123601 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:22:47.968000 1123601 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-230733 NodeName:dockerenv-230733 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:22:47.968129 1123601 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-230733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:22:47.968188 1123601 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:22:47.978089 1123601 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:22:47.978136 1123601 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:22:47.987258 1123601 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0929 12:22:48.005878 1123601 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:22:48.027029 1123601 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
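The kubeadm configuration rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new (2228 bytes). A sketch of how it could be sanity-checked before kubeadm consumes it, assuming the kubeadm config validate subcommand is available in this release (it is in recent kubeadm versions):

	# validate all documents in the staged kubeadm config (InitConfiguration, ClusterConfiguration, Kubelet/KubeProxy configs)
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new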
	I0929 12:22:48.045544 1123601 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:22:48.049172 1123601 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:22:48.060600 1123601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:22:48.128710 1123601 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:22:48.151353 1123601 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733 for IP: 192.168.49.2
	I0929 12:22:48.151365 1123601 certs.go:194] generating shared ca certs ...
	I0929 12:22:48.151380 1123601 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:48.151518 1123601 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 12:22:48.151549 1123601 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 12:22:48.151554 1123601 certs.go:256] generating profile certs ...
	I0929 12:22:48.151606 1123601 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/client.key
	I0929 12:22:48.151621 1123601 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/client.crt with IP's: []
	I0929 12:22:49.018750 1123601 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/client.crt ...
	I0929 12:22:49.018771 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/client.crt: {Name:mkfd531eb57ba7103a4bdc4fc3115c379312c6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:49.019000 1123601 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/client.key ...
	I0929 12:22:49.019017 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/client.key: {Name:mkf56b7bfb4a686b41218b2b5151f2e27a0db58b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:49.019155 1123601 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.key.ba73ab82
	I0929 12:22:49.019169 1123601 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.crt.ba73ab82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 12:22:49.238547 1123601 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.crt.ba73ab82 ...
	I0929 12:22:49.238566 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.crt.ba73ab82: {Name:mkf61b77eff3511633deca0f7067674cfda8f31b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:49.238736 1123601 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.key.ba73ab82 ...
	I0929 12:22:49.238746 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.key.ba73ab82: {Name:mkc6833e5d089502612874c7e894425be9147069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:49.238819 1123601 certs.go:381] copying /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.crt.ba73ab82 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.crt
	I0929 12:22:49.238914 1123601 certs.go:385] copying /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.key.ba73ab82 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.key
	I0929 12:22:49.238988 1123601 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.key
	I0929 12:22:49.239001 1123601 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.crt with IP's: []
	I0929 12:22:49.518744 1123601 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.crt ...
	I0929 12:22:49.518761 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.crt: {Name:mk19576351b75f57f5ddc9ebf92c3d6408079f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:49.518925 1123601 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.key ...
	I0929 12:22:49.518933 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.key: {Name:mk5a53ab29a0f10acf8e1e9667a30893e1d99648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:22:49.519164 1123601 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:22:49.519199 1123601 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 12:22:49.519224 1123601 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:22:49.519243 1123601 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 12:22:49.519838 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:22:49.545718 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 12:22:49.569728 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:22:49.593893 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:22:49.618455 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:22:49.642343 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:22:49.667283 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:22:49.691078 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/dockerenv-230733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:22:49.714655 1123601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:22:49.741396 1123601 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:22:49.760014 1123601 ssh_runner.go:195] Run: openssl version
	I0929 12:22:49.766613 1123601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:22:49.779823 1123601 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:22:49.784056 1123601 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:22:49.784101 1123601 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:22:49.792267 1123601 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
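The two steps above follow the usual OpenSSL hashed-directory convention: the symlink name in /etc/ssl/certs is the subject hash of the CA, so the pairing can be reproduced by hand:

	# prints the subject hash (b5213941 in this run), which names the /etc/ssl/certs symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0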
	I0929 12:22:49.802100 1123601 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:22:49.805503 1123601 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 12:22:49.805542 1123601 kubeadm.go:392] StartCluster: {Name:dockerenv-230733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:dockerenv-230733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:22:49.805619 1123601 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 12:22:49.805660 1123601 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:22:49.840807 1123601 cri.go:89] found id: ""
	I0929 12:22:49.840856 1123601 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:22:49.850348 1123601 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:22:49.859500 1123601 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 12:22:49.859541 1123601 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:22:49.868237 1123601 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 12:22:49.868244 1123601 kubeadm.go:157] found existing configuration files:
	
	I0929 12:22:49.868277 1123601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 12:22:49.876815 1123601 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 12:22:49.876849 1123601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 12:22:49.885034 1123601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 12:22:49.893642 1123601 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 12:22:49.893698 1123601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:22:49.902332 1123601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 12:22:49.911150 1123601 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 12:22:49.911187 1123601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:22:49.919588 1123601 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 12:22:49.928155 1123601 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 12:22:49.928189 1123601 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 12:22:49.936446 1123601 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 12:22:49.974154 1123601 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 12:22:49.974238 1123601 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 12:22:49.989080 1123601 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 12:22:49.989134 1123601 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 12:22:49.989182 1123601 kubeadm.go:310] OS: Linux
	I0929 12:22:49.989220 1123601 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 12:22:49.989303 1123601 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 12:22:49.989373 1123601 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 12:22:49.989413 1123601 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 12:22:49.989462 1123601 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 12:22:49.989506 1123601 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 12:22:49.989562 1123601 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 12:22:49.989621 1123601 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 12:22:50.040984 1123601 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 12:22:50.041130 1123601 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 12:22:50.041273 1123601 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 12:22:50.047393 1123601 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 12:22:50.049218 1123601 out.go:252]   - Generating certificates and keys ...
	I0929 12:22:50.049289 1123601 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 12:22:50.049343 1123601 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 12:22:50.192815 1123601 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 12:22:50.464124 1123601 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 12:22:50.852165 1123601 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 12:22:51.197748 1123601 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 12:22:51.579452 1123601 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 12:22:51.579580 1123601 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [dockerenv-230733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 12:22:51.946140 1123601 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 12:22:51.946254 1123601 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-230733 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 12:22:52.195288 1123601 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 12:22:52.277455 1123601 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 12:22:52.705471 1123601 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 12:22:52.705527 1123601 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 12:22:53.046731 1123601 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 12:22:53.296477 1123601 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 12:22:53.512297 1123601 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 12:22:53.844404 1123601 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 12:22:53.920091 1123601 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 12:22:53.920515 1123601 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 12:22:53.924346 1123601 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 12:22:53.925777 1123601 out.go:252]   - Booting up control plane ...
	I0929 12:22:53.925871 1123601 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 12:22:53.925987 1123601 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 12:22:53.926812 1123601 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 12:22:53.937150 1123601 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 12:22:53.937280 1123601 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 12:22:53.944178 1123601 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 12:22:53.944444 1123601 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 12:22:53.944502 1123601 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 12:22:54.022049 1123601 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 12:22:54.022168 1123601 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 12:22:55.023880 1123601 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001909197s
	I0929 12:22:55.026934 1123601 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 12:22:55.027120 1123601 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 12:22:55.027253 1123601 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 12:22:55.027368 1123601 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 12:22:56.969944 1123601 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.942883321s
	I0929 12:22:57.010513 1123601 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.983280229s
	I0929 12:22:58.528305 1123601 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.501252771s
	I0929 12:22:58.538291 1123601 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 12:22:58.547688 1123601 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 12:22:58.555715 1123601 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 12:22:58.555950 1123601 kubeadm.go:310] [mark-control-plane] Marking the node dockerenv-230733 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 12:22:58.565162 1123601 kubeadm.go:310] [bootstrap-token] Using token: 7fnxcj.lx7vmmfllrfp6tby
	I0929 12:22:58.566469 1123601 out.go:252]   - Configuring RBAC rules ...
	I0929 12:22:58.566607 1123601 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 12:22:58.569852 1123601 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 12:22:58.576186 1123601 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 12:22:58.578660 1123601 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 12:22:58.581077 1123601 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 12:22:58.583380 1123601 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 12:22:58.934770 1123601 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 12:22:59.355326 1123601 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 12:22:59.935075 1123601 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 12:22:59.935958 1123601 kubeadm.go:310] 
	I0929 12:22:59.936075 1123601 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 12:22:59.936085 1123601 kubeadm.go:310] 
	I0929 12:22:59.936185 1123601 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 12:22:59.936191 1123601 kubeadm.go:310] 
	I0929 12:22:59.936230 1123601 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 12:22:59.936303 1123601 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 12:22:59.936373 1123601 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 12:22:59.936379 1123601 kubeadm.go:310] 
	I0929 12:22:59.936454 1123601 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 12:22:59.936459 1123601 kubeadm.go:310] 
	I0929 12:22:59.936505 1123601 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 12:22:59.936511 1123601 kubeadm.go:310] 
	I0929 12:22:59.936550 1123601 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 12:22:59.936638 1123601 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 12:22:59.936729 1123601 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 12:22:59.936736 1123601 kubeadm.go:310] 
	I0929 12:22:59.936813 1123601 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 12:22:59.936928 1123601 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 12:22:59.936932 1123601 kubeadm.go:310] 
	I0929 12:22:59.937021 1123601 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7fnxcj.lx7vmmfllrfp6tby \
	I0929 12:22:59.937160 1123601 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e31917eb19b7c0879803010df843d835ccee1dda0a35b4d1611c13a53effe46e \
	I0929 12:22:59.937186 1123601 kubeadm.go:310] 	--control-plane 
	I0929 12:22:59.937194 1123601 kubeadm.go:310] 
	I0929 12:22:59.937272 1123601 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 12:22:59.937275 1123601 kubeadm.go:310] 
	I0929 12:22:59.937346 1123601 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7fnxcj.lx7vmmfllrfp6tby \
	I0929 12:22:59.937434 1123601 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e31917eb19b7c0879803010df843d835ccee1dda0a35b4d1611c13a53effe46e 
	I0929 12:22:59.940294 1123601 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 12:22:59.940398 1123601 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
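As a side note, the --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed from the cluster CA with the standard openssl pipeline; a sketch only, using the certificates directory of this run (/var/lib/minikube/certs):

	# recompute the sha256 hash of the CA public key that kubeadm join uses for verification
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | \
	  openssl rsa -pubin -outform der 2>/dev/null | \
	  openssl dgst -sha256 -hex | sed 's/^.* //'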
	I0929 12:22:59.940423 1123601 cni.go:84] Creating CNI manager for ""
	I0929 12:22:59.940431 1123601 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:22:59.942423 1123601 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 12:22:59.943581 1123601 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 12:22:59.948029 1123601 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 12:22:59.948038 1123601 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 12:22:59.968358 1123601 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
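Once the CNI manifest above is applied, its rollout can be tracked with the same binaries minikube invokes; a sketch, assuming the manifest creates a DaemonSet named kindnet in kube-system (as the kindnet-* pod later in this log suggests):

	# wait for the kindnet DaemonSet created by /var/tmp/minikube/cni.yaml to become ready
	sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset kindnet --timeout=120s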
	I0929 12:23:00.181080 1123601 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:23:00.181187 1123601 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:23:00.181222 1123601 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-230733 minikube.k8s.io/updated_at=2025_09_29T12_23_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=dockerenv-230733 minikube.k8s.io/primary=true
	I0929 12:23:00.190047 1123601 ops.go:34] apiserver oom_adj: -16
	I0929 12:23:00.257957 1123601 kubeadm.go:1105] duration metric: took 76.861142ms to wait for elevateKubeSystemPrivileges
	I0929 12:23:00.265979 1123601 kubeadm.go:394] duration metric: took 10.460414929s to StartCluster
	I0929 12:23:00.266019 1123601 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:23:00.266096 1123601 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:23:00.266799 1123601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:23:00.267130 1123601 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 12:23:00.267145 1123601 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 12:23:00.267233 1123601 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:23:00.267342 1123601 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-230733"
	I0929 12:23:00.267365 1123601 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-230733"
	I0929 12:23:00.267375 1123601 addons.go:69] Setting default-storageclass=true in profile "dockerenv-230733"
	I0929 12:23:00.267389 1123601 config.go:182] Loaded profile config "dockerenv-230733": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:23:00.267420 1123601 host.go:66] Checking if "dockerenv-230733" exists ...
	I0929 12:23:00.267423 1123601 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-230733"
	I0929 12:23:00.267982 1123601 cli_runner.go:164] Run: docker container inspect dockerenv-230733 --format={{.State.Status}}
	I0929 12:23:00.268044 1123601 cli_runner.go:164] Run: docker container inspect dockerenv-230733 --format={{.State.Status}}
	I0929 12:23:00.268696 1123601 out.go:179] * Verifying Kubernetes components...
	I0929 12:23:00.269918 1123601 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:23:00.293184 1123601 addons.go:238] Setting addon default-storageclass=true in "dockerenv-230733"
	I0929 12:23:00.293219 1123601 host.go:66] Checking if "dockerenv-230733" exists ...
	I0929 12:23:00.293592 1123601 cli_runner.go:164] Run: docker container inspect dockerenv-230733 --format={{.State.Status}}
	I0929 12:23:00.293722 1123601 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:23:00.294987 1123601 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:23:00.295000 1123601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:23:00.295058 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:23:00.324469 1123601 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:23:00.324481 1123601 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:23:00.324531 1123601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-230733
	I0929 12:23:00.325460 1123601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33266 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa Username:docker}
	I0929 12:23:00.346619 1123601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33266 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/dockerenv-230733/id_rsa Username:docker}
	I0929 12:23:00.354779 1123601 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 12:23:00.396849 1123601 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:23:00.441129 1123601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:23:00.461208 1123601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:23:00.518941 1123601 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
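The sed pipeline at 12:23:00.354779 rewrites the CoreDNS Corefile in place to add the hosts block for host.minikube.internal; an illustrative way to read the patched Corefile back afterwards:

	# dump the Corefile from the coredns ConfigMap to confirm the injected hosts {} stanza
	sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'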
	I0929 12:23:00.520010 1123601 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:23:00.520070 1123601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:23:00.723750 1123601 api_server.go:72] duration metric: took 456.568333ms to wait for apiserver process to appear ...
	I0929 12:23:00.723773 1123601 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:23:00.723797 1123601 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 12:23:00.730463 1123601 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
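The same probe can be reproduced outside the minikube code; under the default system:public-info-viewer binding, /healthz (like /livez and /readyz) is readable anonymously, so a plain curl against the endpoint checked above should answer ok (illustrative):

	# -k skips TLS verification of the cluster's self-signed serving certificate
	curl -k https://192.168.49.2:8443/healthz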
	I0929 12:23:00.731403 1123601 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 12:23:00.731457 1123601 api_server.go:141] control plane version: v1.34.0
	I0929 12:23:00.731471 1123601 api_server.go:131] duration metric: took 7.692466ms to wait for apiserver health ...
	I0929 12:23:00.731479 1123601 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:23:00.732909 1123601 addons.go:514] duration metric: took 465.684015ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 12:23:00.736058 1123601 system_pods.go:59] 5 kube-system pods found
	I0929 12:23:00.736107 1123601 system_pods.go:61] "etcd-dockerenv-230733" [4e5d069f-0c6d-42e0-9438-8cc3c9f00109] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:23:00.736115 1123601 system_pods.go:61] "kube-apiserver-dockerenv-230733" [87d1fb6f-d90e-4a9b-bdf2-b0ea97f95ee9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:23:00.736122 1123601 system_pods.go:61] "kube-controller-manager-dockerenv-230733" [b7c91a8e-7c40-4135-817e-33d04e6a5fb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:23:00.736127 1123601 system_pods.go:61] "kube-scheduler-dockerenv-230733" [1a810e4b-b562-4616-9461-7451ca265073] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:23:00.736130 1123601 system_pods.go:61] "storage-provisioner" [f94b79a4-7245-46ba-bfd1-0aaa2d1710e1] Pending
	I0929 12:23:00.736135 1123601 system_pods.go:74] duration metric: took 4.65169ms to wait for pod list to return data ...
	I0929 12:23:00.736147 1123601 kubeadm.go:578] duration metric: took 468.972438ms to wait for: map[apiserver:true system_pods:true]
	I0929 12:23:00.736164 1123601 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:23:00.739093 1123601 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:23:00.739114 1123601 node_conditions.go:123] node cpu capacity is 8
	I0929 12:23:00.739124 1123601 node_conditions.go:105] duration metric: took 2.956422ms to run NodePressure ...
	I0929 12:23:00.739135 1123601 start.go:241] waiting for startup goroutines ...
	I0929 12:23:01.023632 1123601 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-230733" context rescaled to 1 replicas
	I0929 12:23:01.023668 1123601 start.go:246] waiting for cluster config update ...
	I0929 12:23:01.023682 1123601 start.go:255] writing updated cluster config ...
	I0929 12:23:01.024018 1123601 ssh_runner.go:195] Run: rm -f paused
	I0929 12:23:01.077474 1123601 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:23:01.079302 1123601 out.go:179] * Done! kubectl is now configured to use "dockerenv-230733" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad1d104946cab       409467f978b4a       10 seconds ago      Running             kindnet-cni               0                   f8516c757f710       kindnet-5qdnx
	57860c205f919       df0860106674d       11 seconds ago      Running             kube-proxy                0                   a080a7855ba00       kube-proxy-9jcqr
	c84c28f31a71e       6e38f40d628db       11 seconds ago      Running             storage-provisioner       0                   5b4b35311d33e       storage-provisioner
	a6b8668c39546       a0af72f2ec6d6       21 seconds ago      Running             kube-controller-manager   0                   38c3cebb83a29       kube-controller-manager-dockerenv-230733
	e6f6f5148dc09       46169d968e920       21 seconds ago      Running             kube-scheduler            0                   292fb3084848a       kube-scheduler-dockerenv-230733
	212a1544b95df       5f1f5298c888d       21 seconds ago      Running             etcd                      0                   8127944541a24       etcd-dockerenv-230733
	0f2304e212803       90550c43ad2bc       21 seconds ago      Running             kube-apiserver            0                   f8ff66eec5e92       kube-apiserver-dockerenv-230733
	
	
	==> containerd <==
	Sep 29 12:22:55 dockerenv-230733 containerd[759]: time="2025-09-29T12:22:55.257323390Z" level=info msg="StartContainer for \"0f2304e212803d48fc789e13fa16dc16c548aacdce878325e9c00a75f331415a\" returns successfully"
	Sep 29 12:22:55 dockerenv-230733 containerd[759]: time="2025-09-29T12:22:55.263648869Z" level=info msg="StartContainer for \"212a1544b95dfd4d23fc7ea050c88ba58bff823adff2ab036ba58523b38dade0\" returns successfully"
	Sep 29 12:22:55 dockerenv-230733 containerd[759]: time="2025-09-29T12:22:55.272916135Z" level=info msg="StartContainer for \"e6f6f5148dc095031ae5cd99f9e34989e438be96b9689af9bb098f7699f1c252\" returns successfully"
	Sep 29 12:22:55 dockerenv-230733 containerd[759]: time="2025-09-29T12:22:55.293574755Z" level=info msg="StartContainer for \"a6b8668c39546baf4f25d322cd9e313e33e41b70d86993dc8dd2955b1db6d1a0\" returns successfully"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.077325568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:f94b79a4-7245-46ba-bfd1-0aaa2d1710e1,Namespace:kube-system,Attempt:0,}"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.168629743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:f94b79a4-7245-46ba-bfd1-0aaa2d1710e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b4b35311d33e039863e7826437d967a1b0194b4dd2cdf97ce4657cd857ab23e\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.178223900Z" level=info msg="CreateContainer within sandbox \"5b4b35311d33e039863e7826437d967a1b0194b4dd2cdf97ce4657cd857ab23e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.189581272Z" level=info msg="CreateContainer within sandbox \"5b4b35311d33e039863e7826437d967a1b0194b4dd2cdf97ce4657cd857ab23e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c84c28f31a71ec98fac1a792f3163c0f3fe59a78d4b5383106a7da0a115fa313\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.190304919Z" level=info msg="StartContainer for \"c84c28f31a71ec98fac1a792f3163c0f3fe59a78d4b5383106a7da0a115fa313\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.256043337Z" level=info msg="StartContainer for \"c84c28f31a71ec98fac1a792f3163c0f3fe59a78d4b5383106a7da0a115fa313\" returns successfully"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.452777184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jcqr,Uid:7a6834b6-692f-402a-8c80-fad81170db86,Namespace:kube-system,Attempt:0,}"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.456210226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-5qdnx,Uid:a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5,Namespace:kube-system,Attempt:0,}"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.520785448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jcqr,Uid:7a6834b6-692f-402a-8c80-fad81170db86,Namespace:kube-system,Attempt:0,} returns sandbox id \"a080a7855ba00a6c2d3cfb1c69a32f2f6e8e03043cd4ae326f24251c3593892f\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.526562108Z" level=info msg="CreateContainer within sandbox \"a080a7855ba00a6c2d3cfb1c69a32f2f6e8e03043cd4ae326f24251c3593892f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.536856487Z" level=info msg="CreateContainer within sandbox \"a080a7855ba00a6c2d3cfb1c69a32f2f6e8e03043cd4ae326f24251c3593892f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"57860c205f9190face9cf7183d878cc0936bc156dfe8288af8be0756fa78e6b6\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.537528439Z" level=info msg="StartContainer for \"57860c205f9190face9cf7183d878cc0936bc156dfe8288af8be0756fa78e6b6\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.550114678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n625v,Uid:1c159786-cc99-400f-ad63-6c033da90abc,Namespace:kube-system,Attempt:0,}"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.573276520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n625v,Uid:1c159786-cc99-400f-ad63-6c033da90abc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\": failed to find network info for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.614619545Z" level=info msg="StartContainer for \"57860c205f9190face9cf7183d878cc0936bc156dfe8288af8be0756fa78e6b6\" returns successfully"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.775816174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-5qdnx,Uid:a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8516c757f710a78b1c3040806f8f36a4fa49fc45c655ed67b89fd50b091c24f\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.781258449Z" level=info msg="CreateContainer within sandbox \"f8516c757f710a78b1c3040806f8f36a4fa49fc45c655ed67b89fd50b091c24f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.793115306Z" level=info msg="CreateContainer within sandbox \"f8516c757f710a78b1c3040806f8f36a4fa49fc45c655ed67b89fd50b091c24f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"ad1d104946cabf789b9e0cb715c46cbf6636cc68b0b131ce944e7cb2c7ae487e\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.793881751Z" level=info msg="StartContainer for \"ad1d104946cabf789b9e0cb715c46cbf6636cc68b0b131ce944e7cb2c7ae487e\""
	Sep 29 12:23:05 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:05.914636920Z" level=info msg="StartContainer for \"ad1d104946cabf789b9e0cb715c46cbf6636cc68b0b131ce944e7cb2c7ae487e\" returns successfully"
	Sep 29 12:23:09 dockerenv-230733 containerd[759]: time="2025-09-29T12:23:09.619909612Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	
	==> describe nodes <==
	Name:               dockerenv-230733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-230733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=dockerenv-230733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_23_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-230733
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:23:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:23:09 +0000   Mon, 29 Sep 2025 12:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:23:09 +0000   Mon, 29 Sep 2025 12:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:23:09 +0000   Mon, 29 Sep 2025 12:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:23:09 +0000   Mon, 29 Sep 2025 12:22:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-230733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 04ac7f0dcfc64bc89122042ba46c8e6a
	  System UUID:                0e906b12-24ac-4077-8192-7fb19f0625db
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-n625v                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11s
	  kube-system                 etcd-dockerenv-230733                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17s
	  kube-system                 kindnet-5qdnx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-apiserver-dockerenv-230733             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-230733    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-9jcqr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-dockerenv-230733             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node dockerenv-230733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node dockerenv-230733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node dockerenv-230733 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17s                kubelet          Node dockerenv-230733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s                kubelet          Node dockerenv-230733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s                kubelet          Node dockerenv-230733 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node dockerenv-230733 event: Registered Node dockerenv-230733 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +1.517728] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a5 e4 37 95 62 08 06
	[  +0.115888] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[ +12.890125] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 a3 59 25 5e a0 08 06
	[  +0.000394] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +5.179291] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	[Sep29 12:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[ +13.445656] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 1e 7c f1 b5 0d 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[  +7.699318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 ba 46 0d 66 00 08 06
	[  +0.000403] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[  +4.637857] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 16 6b 9e 59 3c 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	
	
	==> etcd [212a1544b95dfd4d23fc7ea050c88ba58bff823adff2ab036ba58523b38dade0] <==
	{"level":"warn","ts":"2025-09-29T12:22:56.275099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.281632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.290458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.297345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.304445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.311442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.325005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.331622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.339009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.345678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.354010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.360468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.367340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.373949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.380740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.388928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.395680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.403732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.410795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.417941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.424860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.435338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.441925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.448427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:22:56.498043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37620","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:23:16 up  5:05,  0 users,  load average: 1.39, 1.86, 2.46
	Linux dockerenv-230733 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ad1d104946cabf789b9e0cb715c46cbf6636cc68b0b131ce944e7cb2c7ae487e] <==
	I0929 12:23:06.160006       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 12:23:06.160295       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 12:23:06.160488       1 main.go:148] setting mtu 1500 for CNI 
	I0929 12:23:06.160508       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 12:23:06.160543       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T12:23:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 12:23:06.360306       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 12:23:06.360419       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 12:23:06.360441       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 12:23:06.360933       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 12:23:06.760840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 12:23:06.760870       1 metrics.go:72] Registering metrics
	I0929 12:23:06.858495       1 controller.go:711] "Syncing nftables rules"
	I0929 12:23:16.362103       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:23:16.362147       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0f2304e212803d48fc789e13fa16dc16c548aacdce878325e9c00a75f331415a] <==
	I0929 12:22:57.003351       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0929 12:22:57.003820       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0929 12:22:57.004588       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 12:22:57.009780       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I0929 12:22:57.015712       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0929 12:22:57.030825       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:22:57.046718       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0929 12:22:57.204565       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0929 12:22:57.907756       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0929 12:22:57.914171       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0929 12:22:57.914194       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 12:22:58.377140       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 12:22:58.412167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 12:22:58.511838       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0929 12:22:58.518050       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 12:22:58.519365       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 12:22:58.524703       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 12:22:59.022695       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 12:22:59.341482       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 12:22:59.354314       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 12:22:59.361755       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 12:23:04.728235       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:23:04.733248       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:23:05.074342       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 12:23:05.125191       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a6b8668c39546baf4f25d322cd9e313e33e41b70d86993dc8dd2955b1db6d1a0] <==
	I0929 12:23:04.006799       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:23:04.006821       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:23:04.006830       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 12:23:04.020438       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 12:23:04.020562       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 12:23:04.021581       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 12:23:04.021603       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:23:04.021627       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 12:23:04.021637       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:23:04.021669       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:23:04.021674       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:23:04.021775       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 12:23:04.021787       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 12:23:04.021864       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:23:04.022033       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:23:04.023004       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 12:23:04.023034       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:23:04.023106       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:23:04.023137       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 12:23:04.023111       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:23:04.023158       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 12:23:04.024246       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 12:23:04.027568       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:23:04.038065       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:23:04.044110       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [57860c205f9190face9cf7183d878cc0936bc156dfe8288af8be0756fa78e6b6] <==
	I0929 12:23:05.645945       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:23:05.703825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:23:05.804035       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:23:05.804074       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:23:05.804202       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:23:05.828921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:23:05.828992       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:23:05.833881       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:23:05.834295       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:23:05.834330       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:23:05.835590       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:23:05.835627       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:23:05.835678       1 config.go:200] "Starting service config controller"
	I0929 12:23:05.835684       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:23:05.835695       1 config.go:309] "Starting node config controller"
	I0929 12:23:05.835708       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:23:05.835716       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:23:05.835725       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:23:05.835731       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:23:05.935760       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:23:05.935808       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:23:05.935832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e6f6f5148dc095031ae5cd99f9e34989e438be96b9689af9bb098f7699f1c252] <==
	E0929 12:22:56.966668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:22:56.966757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:22:56.966855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:22:56.966887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:22:56.967945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:22:56.968887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:22:56.969098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:22:56.969194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:22:56.969273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:22:56.969356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:22:56.969457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:22:56.969548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:22:56.969553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:22:56.969608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:22:56.969674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:22:56.969754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:22:57.835160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:22:57.851407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:22:57.855552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:22:57.988626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:22:58.004914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:22:58.037141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:22:58.131518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:22:58.151736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:23:00.563690       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 12:23:00 dockerenv-230733 kubelet[1534]: I0929 12:23:00.222551    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-230733" podStartSLOduration=1.222530439 podStartE2EDuration="1.222530439s" podCreationTimestamp="2025-09-29 12:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:23:00.222530374 +0000 UTC m=+1.158797064" watchObservedRunningTime="2025-09-29 12:23:00.222530439 +0000 UTC m=+1.158797123"
	Sep 29 12:23:04 dockerenv-230733 kubelet[1534]: I0929 12:23:04.268587    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdk9\" (UniqueName: \"kubernetes.io/projected/f94b79a4-7245-46ba-bfd1-0aaa2d1710e1-kube-api-access-zqdk9\") pod \"storage-provisioner\" (UID: \"f94b79a4-7245-46ba-bfd1-0aaa2d1710e1\") " pod="kube-system/storage-provisioner"
	Sep 29 12:23:04 dockerenv-230733 kubelet[1534]: I0929 12:23:04.268649    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f94b79a4-7245-46ba-bfd1-0aaa2d1710e1-tmp\") pod \"storage-provisioner\" (UID: \"f94b79a4-7245-46ba-bfd1-0aaa2d1710e1\") " pod="kube-system/storage-provisioner"
	Sep 29 12:23:04 dockerenv-230733 kubelet[1534]: E0929 12:23:04.375163    1534 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 29 12:23:04 dockerenv-230733 kubelet[1534]: E0929 12:23:04.375203    1534 projected.go:196] Error preparing data for projected volume kube-api-access-zqdk9 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 29 12:23:04 dockerenv-230733 kubelet[1534]: E0929 12:23:04.375303    1534 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f94b79a4-7245-46ba-bfd1-0aaa2d1710e1-kube-api-access-zqdk9 podName:f94b79a4-7245-46ba-bfd1-0aaa2d1710e1 nodeName:}" failed. No retries permitted until 2025-09-29 12:23:04.875270692 +0000 UTC m=+5.811537375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zqdk9" (UniqueName: "kubernetes.io/projected/f94b79a4-7245-46ba-bfd1-0aaa2d1710e1-kube-api-access-zqdk9") pod "storage-provisioner" (UID: "f94b79a4-7245-46ba-bfd1-0aaa2d1710e1") : configmap "kube-root-ca.crt" not found
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174118    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a6834b6-692f-402a-8c80-fad81170db86-xtables-lock\") pod \"kube-proxy-9jcqr\" (UID: \"7a6834b6-692f-402a-8c80-fad81170db86\") " pod="kube-system/kube-proxy-9jcqr"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174216    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5-cni-cfg\") pod \"kindnet-5qdnx\" (UID: \"a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5\") " pod="kube-system/kindnet-5qdnx"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174303    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a6834b6-692f-402a-8c80-fad81170db86-kube-proxy\") pod \"kube-proxy-9jcqr\" (UID: \"7a6834b6-692f-402a-8c80-fad81170db86\") " pod="kube-system/kube-proxy-9jcqr"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174337    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a6834b6-692f-402a-8c80-fad81170db86-lib-modules\") pod \"kube-proxy-9jcqr\" (UID: \"7a6834b6-692f-402a-8c80-fad81170db86\") " pod="kube-system/kube-proxy-9jcqr"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174379    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5-xtables-lock\") pod \"kindnet-5qdnx\" (UID: \"a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5\") " pod="kube-system/kindnet-5qdnx"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174407    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5-lib-modules\") pod \"kindnet-5qdnx\" (UID: \"a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5\") " pod="kube-system/kindnet-5qdnx"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174443    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhmn\" (UniqueName: \"kubernetes.io/projected/a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5-kube-api-access-prhmn\") pod \"kindnet-5qdnx\" (UID: \"a7d4bee5-e0cb-4861-a3d3-423e8a3dc9c5\") " pod="kube-system/kindnet-5qdnx"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.174936    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbp7\" (UniqueName: \"kubernetes.io/projected/7a6834b6-692f-402a-8c80-fad81170db86-kube-api-access-bsbp7\") pod \"kube-proxy-9jcqr\" (UID: \"7a6834b6-692f-402a-8c80-fad81170db86\") " pod="kube-system/kube-proxy-9jcqr"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.276032    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c159786-cc99-400f-ad63-6c033da90abc-config-volume\") pod \"coredns-66bc5c9577-n625v\" (UID: \"1c159786-cc99-400f-ad63-6c033da90abc\") " pod="kube-system/coredns-66bc5c9577-n625v"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: I0929 12:23:05.276075    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kht9g\" (UniqueName: \"kubernetes.io/projected/1c159786-cc99-400f-ad63-6c033da90abc-kube-api-access-kht9g\") pod \"coredns-66bc5c9577-n625v\" (UID: \"1c159786-cc99-400f-ad63-6c033da90abc\") " pod="kube-system/coredns-66bc5c9577-n625v"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: E0929 12:23:05.573584    1534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\": failed to find network info for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\""
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: E0929 12:23:05.573712    1534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\": failed to find network info for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\"" pod="kube-system/coredns-66bc5c9577-n625v"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: E0929 12:23:05.573743    1534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\": failed to find network info for sandbox \"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\"" pod="kube-system/coredns-66bc5c9577-n625v"
	Sep 29 12:23:05 dockerenv-230733 kubelet[1534]: E0929 12:23:05.573849    1534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-n625v_kube-system(1c159786-cc99-400f-ad63-6c033da90abc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-n625v_kube-system(1c159786-cc99-400f-ad63-6c033da90abc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\\\": failed to find network info for sandbox \\\"22322ab230569bf6582d26b7f534aadd52cf5d978e0efca32e2d45ddc995b96b\\\"\"" pod="kube-system/coredns-66bc5c9577-n625v" podUID="1c159786-cc99-400f-ad63-6c033da90abc"
	Sep 29 12:23:06 dockerenv-230733 kubelet[1534]: I0929 12:23:06.199255    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.199233052 podStartE2EDuration="6.199233052s" podCreationTimestamp="2025-09-29 12:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:23:06.19879098 +0000 UTC m=+7.135057651" watchObservedRunningTime="2025-09-29 12:23:06.199233052 +0000 UTC m=+7.135499741"
	Sep 29 12:23:06 dockerenv-230733 kubelet[1534]: I0929 12:23:06.208528    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9jcqr" podStartSLOduration=1.208507528 podStartE2EDuration="1.208507528s" podCreationTimestamp="2025-09-29 12:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:23:06.208256584 +0000 UTC m=+7.144523274" watchObservedRunningTime="2025-09-29 12:23:06.208507528 +0000 UTC m=+7.144774213"
	Sep 29 12:23:06 dockerenv-230733 kubelet[1534]: I0929 12:23:06.219385    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5qdnx" podStartSLOduration=1.219365342 podStartE2EDuration="1.219365342s" podCreationTimestamp="2025-09-29 12:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:23:06.219177025 +0000 UTC m=+7.155443714" watchObservedRunningTime="2025-09-29 12:23:06.219365342 +0000 UTC m=+7.155632031"
	Sep 29 12:23:09 dockerenv-230733 kubelet[1534]: I0929 12:23:09.619219    1534 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 12:23:09 dockerenv-230733 kubelet[1534]: I0929 12:23:09.620217    1534 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	
	==> storage-provisioner [c84c28f31a71ec98fac1a792f3163c0f3fe59a78d4b5383106a7da0a115fa313] <==
	I0929 12:23:05.265569       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
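The containerd and kubelet logs above show the coredns-66bc5c9577-n625v sandbox failing at 12:23:05 with "failed to find network info", before a CNI config was in place on the node (containerd was still waiting for one at 12:23:09). A minimal sketch for confirming that by hand while the cluster is still up; the profile name comes from this run, and the commands below are illustrative only, not part of the test harness:

	# is a CNI config present on the node yet?
	minikube -p dockerenv-230733 ssh -- sudo ls -l /etc/cni/net.d
	# list the coredns pod sandboxes as containerd/CRI sees them
	minikube -p dockerenv-230733 ssh -- sudo crictl pods --name coredns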
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-230733 -n dockerenv-230733
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-230733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-n625v
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-230733 describe pod coredns-66bc5c9577-n625v
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-230733 describe pod coredns-66bc5c9577-n625v: exit status 1 (68.144721ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-n625v" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context dockerenv-230733 describe pod coredns-66bc5c9577-n625v: exit status 1
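The NotFound above is a timing artifact: the coredns pod returned by the phase filter was evidently gone by the time describe ran. A small sketch of the same post-mortem check done by hand, tolerating pods that disappear between the list and the describe; the context name is taken from the log and the loop itself is not part of the harness:

	kubectl --context dockerenv-230733 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  # the pod may already be deleted; report that instead of aborting the whole check
	  kubectl --context dockerenv-230733 -n "$ns" describe pod "$name" \
	    || echo "pod $ns/$name disappeared before it could be described"
	done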
helpers_test.go:175: Cleaning up "dockerenv-230733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-230733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-230733: (2.31259157s)
--- FAIL: TestDockerEnvContainerd (43.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-782022 --alsologtostderr -v=1]
E0929 12:35:20.683464 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-782022 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-782022 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-782022 --alsologtostderr -v=1] stderr:
I0929 12:32:01.768745 1152366 out.go:360] Setting OutFile to fd 1 ...
I0929 12:32:01.768840 1152366 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:32:01.768848 1152366 out.go:374] Setting ErrFile to fd 2...
I0929 12:32:01.768851 1152366 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:32:01.769091 1152366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
I0929 12:32:01.769378 1152366 mustload.go:65] Loading cluster: functional-782022
I0929 12:32:01.769724 1152366 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:32:01.770151 1152366 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
I0929 12:32:01.787431 1152366 host.go:66] Checking if "functional-782022" exists ...
I0929 12:32:01.787651 1152366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 12:32:01.842011 1152366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 12:32:01.832004495 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0929 12:32:01.842148 1152366 api_server.go:166] Checking apiserver status ...
I0929 12:32:01.842207 1152366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 12:32:01.842243 1152366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
I0929 12:32:01.859479 1152366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
I0929 12:32:01.960496 1152366 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5072/cgroup
W0929 12:32:01.970946 1152366 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5072/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0929 12:32:01.971029 1152366 ssh_runner.go:195] Run: ls
I0929 12:32:01.974897 1152366 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 12:32:01.980081 1152366 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0929 12:32:01.980131 1152366 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 12:32:01.980292 1152366 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:32:01.980308 1152366 addons.go:69] Setting dashboard=true in profile "functional-782022"
I0929 12:32:01.980316 1152366 addons.go:238] Setting addon dashboard=true in "functional-782022"
I0929 12:32:01.980347 1152366 host.go:66] Checking if "functional-782022" exists ...
I0929 12:32:01.980646 1152366 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
I0929 12:32:01.999872 1152366 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 12:32:02.001099 1152366 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 12:32:02.002086 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 12:32:02.002109 1152366 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 12:32:02.002186 1152366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
I0929 12:32:02.019768 1152366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
I0929 12:32:02.130067 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 12:32:02.130092 1152366 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 12:32:02.149246 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 12:32:02.149282 1152366 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 12:32:02.168814 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 12:32:02.168845 1152366 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 12:32:02.188430 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 12:32:02.188453 1152366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 12:32:02.208496 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 12:32:02.208538 1152366 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 12:32:02.227833 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 12:32:02.227872 1152366 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 12:32:02.247890 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 12:32:02.247915 1152366 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 12:32:02.267174 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 12:32:02.267200 1152366 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 12:32:02.285951 1152366 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 12:32:02.285990 1152366 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 12:32:02.304563 1152366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 12:32:02.721537 1152366 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-782022 addons enable metrics-server

                                                
                                                
I0929 12:32:02.722433 1152366 addons.go:201] Writing out "functional-782022" config to set dashboard=true...
W0929 12:32:02.722636 1152366 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 12:32:02.723302 1152366 kapi.go:59] client config for functional-782022: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 12:32:02.723850 1152366 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 12:32:02.723865 1152366 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 12:32:02.723869 1152366 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 12:32:02.723873 1152366 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 12:32:02.723876 1152366 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 12:32:02.730857 1152366 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  76270387-458c-4a57-b237-9ce6f9316cef 1278 0 2025-09-29 12:32:02 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 12:32:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.83.159,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.83.159],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 12:32:02.731034 1152366 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 12:32:02.731098 1152366 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-782022 proxy --port 36195]
I0929 12:32:02.731340 1152366 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 12:32:02.774122 1152366 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
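dashboard.go:152 launches kubectl proxy on the requested port and dashboard.go:157 then waits for the process to print its listen address. A minimal sketch of starting the proxy and scanning its stdout for the "Starting to serve on host:port" line; the regular expression and lifecycle handling here are illustrative only:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-782022", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// kubectl proxy prints e.g. "Starting to serve on 127.0.0.1:36195".
	re := regexp.MustCompile(`Starting to serve on (\S+)`)
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println("proxy listening on", m[1])
			break
		}
	}
	// In real code the proxy process would be kept alive and torn down on cleanup.
}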
W0929 12:32:02.774186 1152366 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 12:32:02.781931 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[373a4cbb-114a-4f93-8c77-5ccfa7410f7f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc000489100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6640 TLS:<nil>}
I0929 12:32:02.782024 1152366 retry.go:31] will retry after 87.125µs: Temporary Error: unexpected response code: 503
I0929 12:32:02.785212 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[216a2320-0672-47f9-b8a8-161a37345ab9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc00055a900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143900 TLS:<nil>}
I0929 12:32:02.785267 1152366 retry.go:31] will retry after 176.617µs: Temporary Error: unexpected response code: 503
I0929 12:32:02.788207 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf12401a-9407-45d8-83e4-900d3dd8a935] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc00055aa00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b68c0 TLS:<nil>}
I0929 12:32:02.788256 1152366 retry.go:31] will retry after 184.055µs: Temporary Error: unexpected response code: 503
I0929 12:32:02.791232 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d59e69df-0d17-4426-a64f-419a282f2020] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc0008a1c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6c80 TLS:<nil>}
I0929 12:32:02.791279 1152366 retry.go:31] will retry after 251.978µs: Temporary Error: unexpected response code: 503
I0929 12:32:02.794085 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0967a278-22d5-4793-aea4-7fe18604343e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc000489240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I0929 12:32:02.794126 1152366 retry.go:31] will retry after 454.879µs: Temporary Error: unexpected response code: 503
I0929 12:32:02.797052 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1b435ccd-2051-48b9-9e63-fb7ed0cf5c25] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc00055ab40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143a40 TLS:<nil>}
I0929 12:32:02.797097 1152366 retry.go:31] will retry after 712.036µs: Temporary Error: unexpected response code: 503
I0929 12:32:02.799893 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38ebfd08-42f4-4fc2-b3d9-d2887b582901] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc000489340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6dc0 TLS:<nil>}
I0929 12:32:02.799936 1152366 retry.go:31] will retry after 1.211143ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.804007 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b43bd92-d642-4276-96d3-33770aec4ea8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc0008a1d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143b80 TLS:<nil>}
I0929 12:32:02.804060 1152366 retry.go:31] will retry after 2.029222ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.807955 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2aaaa402-bc3d-40e1-8bd1-7f8e44e4aab3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc0008a1d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143cc0 TLS:<nil>}
I0929 12:32:02.808004 1152366 retry.go:31] will retry after 3.006973ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.812932 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae70dbef-dea3-4248-9b8e-125c9169ac17] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc0008a1e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I0929 12:32:02.813001 1152366 retry.go:31] will retry after 2.401422ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.817897 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[12412743-2fe8-441c-9bfb-c80aef3f6ee4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc00055ac40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I0929 12:32:02.817935 1152366 retry.go:31] will retry after 5.385231ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.825876 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac452fc9-10dc-4b6f-bc66-efb80d5d602a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc0008a1f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6f00 TLS:<nil>}
I0929 12:32:02.825913 1152366 retry.go:31] will retry after 8.884832ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.837935 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[079502ef-8cc4-4748-abe0-48d6409fab51] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc000489540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207e00 TLS:<nil>}
I0929 12:32:02.838008 1152366 retry.go:31] will retry after 13.634266ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.854471 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[87407875-cec1-4f11-aec1-9aae6badebe0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc0005a6ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000143e00 TLS:<nil>}
I0929 12:32:02.854521 1152366 retry.go:31] will retry after 9.988767ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.866666 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46a1e63b-c327-428a-b7b4-426890d3bb0d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc000489640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a000 TLS:<nil>}
I0929 12:32:02.866731 1152366 retry.go:31] will retry after 36.162388ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.906749 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38bd7d43-54ff-4ba9-8336-219c327776d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc00055ad00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a0000 TLS:<nil>}
I0929 12:32:02.906825 1152366 retry.go:31] will retry after 36.098773ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.947042 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1b519e34-a20e-4f8d-b62a-312e9c3c7baf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc0005a72c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7180 TLS:<nil>}
I0929 12:32:02.947108 1152366 retry.go:31] will retry after 44.050739ms: Temporary Error: unexpected response code: 503
I0929 12:32:02.994410 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b08a6d3-38aa-4e19-910e-10482da847a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:02 GMT]] Body:0xc00055ae00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a140 TLS:<nil>}
I0929 12:32:02.994478 1152366 retry.go:31] will retry after 81.806025ms: Temporary Error: unexpected response code: 503
I0929 12:32:03.080009 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dc3c25a8-41a8-4e3f-84af-fa82df5397ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:03 GMT]] Body:0xc000489780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b72c0 TLS:<nil>}
I0929 12:32:03.080076 1152366 retry.go:31] will retry after 87.690315ms: Temporary Error: unexpected response code: 503
I0929 12:32:03.171604 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11e730a9-dfa8-4932-a41e-910ef0ad3234] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:03 GMT]] Body:0xc0005a73c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7400 TLS:<nil>}
I0929 12:32:03.171668 1152366 retry.go:31] will retry after 277.157092ms: Temporary Error: unexpected response code: 503
I0929 12:32:03.452109 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[148f53ae-81c2-4765-9419-f718074e0d88] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:03 GMT]] Body:0xc000800040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a280 TLS:<nil>}
I0929 12:32:03.452172 1152366 retry.go:31] will retry after 467.385873ms: Temporary Error: unexpected response code: 503
I0929 12:32:03.922776 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2cf8b634-f1ca-4993-b882-88a44c2051ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:03 GMT]] Body:0xc00055af80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a0140 TLS:<nil>}
I0929 12:32:03.922851 1152366 retry.go:31] will retry after 340.241826ms: Temporary Error: unexpected response code: 503
I0929 12:32:04.266100 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a78c822a-9147-4284-862a-7b85cdd0a18a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:04 GMT]] Body:0xc0005a7500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7540 TLS:<nil>}
I0929 12:32:04.266180 1152366 retry.go:31] will retry after 997.814514ms: Temporary Error: unexpected response code: 503
I0929 12:32:05.267376 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a15dc094-e9bd-4aab-8bb9-b58203c99f30] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:05 GMT]] Body:0xc00055b080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016c2000 TLS:<nil>}
I0929 12:32:05.267454 1152366 retry.go:31] will retry after 955.073639ms: Temporary Error: unexpected response code: 503
I0929 12:32:06.225849 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[017a4545-396c-458e-a374-bea3f1f419e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:06 GMT]] Body:0xc0005a7680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7680 TLS:<nil>}
I0929 12:32:06.225914 1152366 retry.go:31] will retry after 966.707697ms: Temporary Error: unexpected response code: 503
I0929 12:32:07.196215 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[563e9fbb-008c-4c93-943b-8752c800b4ba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:07 GMT]] Body:0xc00055b180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016c2140 TLS:<nil>}
I0929 12:32:07.196291 1152366 retry.go:31] will retry after 1.629035199s: Temporary Error: unexpected response code: 503
I0929 12:32:08.829876 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[45a38819-c2e2-4964-b1a8-08155d82c207] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:08 GMT]] Body:0xc000800240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b77c0 TLS:<nil>}
I0929 12:32:08.829937 1152366 retry.go:31] will retry after 5.224168876s: Temporary Error: unexpected response code: 503
I0929 12:32:14.058531 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[819bd133-48c2-4169-938d-a7019c7d707f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:14 GMT]] Body:0xc0008002c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016c2280 TLS:<nil>}
I0929 12:32:14.058611 1152366 retry.go:31] will retry after 2.93751284s: Temporary Error: unexpected response code: 503
I0929 12:32:16.999927 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16958dc0-28be-4974-a0af-95fdf6ae31c4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:16 GMT]] Body:0xc000a7b6c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a0280 TLS:<nil>}
I0929 12:32:17.000031 1152366 retry.go:31] will retry after 11.276655812s: Temporary Error: unexpected response code: 503
I0929 12:32:28.281217 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f98526ae-5fa0-4b4a-9fe9-3bcd33a6d97d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:28 GMT]] Body:0xc0005a7800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7900 TLS:<nil>}
I0929 12:32:28.281287 1152366 retry.go:31] will retry after 16.047901192s: Temporary Error: unexpected response code: 503
I0929 12:32:44.333530 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b2a1259-bcf5-493b-802d-3b71b322ad06] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:32:44 GMT]] Body:0xc00055b300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016c23c0 TLS:<nil>}
I0929 12:32:44.333599 1152366 retry.go:31] will retry after 25.502354262s: Temporary Error: unexpected response code: 503
I0929 12:33:09.840982 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62662d46-eed5-4ec6-bb49-7ab14a89bcde] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:33:09 GMT]] Body:0xc00055b3c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7a40 TLS:<nil>}
I0929 12:33:09.841057 1152366 retry.go:31] will retry after 40.096071086s: Temporary Error: unexpected response code: 503
I0929 12:33:49.943314 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[26d6fe2a-ea8b-4dbd-be27-82d7d1feb2bb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:33:49 GMT]] Body:0xc00055b480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015c4000 TLS:<nil>}
I0929 12:33:49.943389 1152366 retry.go:31] will retry after 35.316038751s: Temporary Error: unexpected response code: 503
I0929 12:34:25.264590 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a42ed7a4-15b5-4969-9b2e-72561d932a63] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:34:25 GMT]] Body:0xc000a7a080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c6000 TLS:<nil>}
I0929 12:34:25.264658 1152366 retry.go:31] will retry after 1m8.564373388s: Temporary Error: unexpected response code: 503
I0929 12:35:33.832652 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd941852-2322-48dd-a279-8b421e801c30] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:35:33 GMT]] Body:0xc000800040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c6140 TLS:<nil>}
I0929 12:35:33.832745 1152366 retry.go:31] will retry after 54.504800357s: Temporary Error: unexpected response code: 503
I0929 12:36:28.341532 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b78bd3f0-6f29-455c-bedf-759f917cd81a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:36:28 GMT]] Body:0xc000800040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c6640 TLS:<nil>}
I0929 12:36:28.341631 1152366 retry.go:31] will retry after 32.178963759s: Temporary Error: unexpected response code: 503
I0929 12:37:00.527293 1152366 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ecbe97b-c376-4ad5-8c4f-d8556a2d3268] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 12:37:00 GMT]] Body:0xc000a7a0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c6780 TLS:<nil>}
I0929 12:37:00.527401 1152366 retry.go:31] will retry after 32.669141883s: Temporary Error: unexpected response code: 503
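Every attempt above requests the dashboard's service-proxy URL and treats any non-200 status as a temporary error, retrying with growing, jittered delays; the unbroken run of 503s means the kubernetes-dashboard pod never became ready behind its Service before the test gave up. A minimal sketch of that polling pattern, assuming illustrative backoff constants rather than minikube's retry.go values:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollUntilOK keeps requesting url until it returns HTTP 200 or the deadline passes,
// roughly doubling the wait between attempts (capped at maxWait).
func pollUntilOK(url string, timeout, maxWait time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 100 * time.Millisecond
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			err = fmt.Errorf("unexpected response code: %d", resp.StatusCode)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %v", url, err)
		}
		time.Sleep(wait)
		if wait *= 2; wait > maxWait {
			wait = maxWait
		}
	}
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	fmt.Println(pollUntilOK(url, 3*time.Minute, 30*time.Second))
}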
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-782022
helpers_test.go:243: (dbg) docker inspect functional-782022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	        "Created": "2025-09-29T12:23:56.273004679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1134005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:23:56.310015171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hostname",
	        "HostsPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hosts",
	        "LogPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298-json.log",
	        "Name": "/functional-782022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-782022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-782022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	                "LowerDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-782022",
	                "Source": "/var/lib/docker/volumes/functional-782022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-782022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-782022",
	                "name.minikube.sigs.k8s.io": "functional-782022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50c97dcabfb7e5784a6ece9564a867bb96ac9a43766c5bddf69d122258ab8e5a",
	            "SandboxKey": "/var/run/docker/netns/50c97dcabfb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33276"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33277"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33280"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33278"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33279"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-782022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:3f:40:a7:34:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1188f63361a712974d693e573886a912852ad97f16abca5373c8b53f08ee79f7",
	                    "EndpointID": "aaa66c085157d64022cbc7f79013ed8657f004e3bea603739a48e13aecf19afc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-782022",
	                        "1786c46f3852"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
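The inspect dump above records the container's published ports (22/tcp → 33276, 8441/tcp → 33279, and so on) alongside its network and mount configuration. A minimal sketch of pulling those port mappings out of `docker inspect` output programmatically, using a narrow struct that covers only the fields needed (the type names are illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding mirrors one entry under NetworkSettings.Ports in the inspect JSON.
type portBinding struct {
	HostIp   string
	HostPort string
}

type inspect struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-782022").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for port, bindings := range containers[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}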
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-782022 -n functional-782022
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 logs -n 25: (1.480627939s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount2 --alsologtostderr -v=1 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ mount          │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount3 --alsologtostderr -v=1 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ ssh            │ functional-782022 ssh findmnt -T /mount1                                                                           │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ ssh            │ functional-782022 ssh findmnt -T /mount2                                                                           │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ ssh            │ functional-782022 ssh findmnt -T /mount3                                                                           │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ mount          │ -p functional-782022 --kill=true                                                                                   │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start          │ -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start          │ -p functional-782022 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start          │ -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-782022 --alsologtostderr -v=1                                                     │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ service        │ functional-782022 service list                                                                                     │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ service        │ functional-782022 service list -o json                                                                             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ service        │ functional-782022 service --namespace=default --https --url hello-node                                             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ service        │ functional-782022 service hello-node --url --format={{.IP}}                                                        │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ service        │ functional-782022 service hello-node --url                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ image          │ functional-782022 image ls --format short --alsologtostderr                                                        │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ ssh            │ functional-782022 ssh pgrep buildkitd                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ image          │ functional-782022 image build -t localhost/my-image:functional-782022 testdata/build --alsologtostderr             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls --format yaml --alsologtostderr                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls --format json --alsologtostderr                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls --format table --alsologtostderr                                                        │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls                                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ update-context │ functional-782022 update-context --alsologtostderr -v=2                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ update-context │ functional-782022 update-context --alsologtostderr -v=2                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ update-context │ functional-782022 update-context --alsologtostderr -v=2                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:32:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:32:01.623393 1152286 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:32:01.623483 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623490 1152286 out.go:374] Setting ErrFile to fd 2...
	I0929 12:32:01.623494 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623773 1152286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:32:01.624207 1152286 out.go:368] Setting JSON to false
	I0929 12:32:01.625266 1152286 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18859,"bootTime":1759130263,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:32:01.625351 1152286 start.go:140] virtualization: kvm guest
	I0929 12:32:01.627031 1152286 out.go:179] * [functional-782022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:32:01.628165 1152286 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:32:01.628190 1152286 notify.go:220] Checking for updates...
	I0929 12:32:01.630582 1152286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:32:01.631578 1152286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:32:01.632499 1152286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:32:01.633553 1152286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:32:01.634463 1152286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:32:01.635841 1152286 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:32:01.636357 1152286 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:32:01.659705 1152286 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:32:01.659797 1152286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:32:01.714441 1152286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 12:32:01.703718947 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:32:01.714537 1152286 docker.go:318] overlay module found
	I0929 12:32:01.716020 1152286 out.go:179] * Using the docker driver based on existing profile
	I0929 12:32:01.717088 1152286 start.go:304] selected driver: docker
	I0929 12:32:01.717115 1152286 start.go:924] validating driver "docker" against &{Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:32:01.717223 1152286 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:32:01.718814 1152286 out.go:203] 
	W0929 12:32:01.719743 1152286 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 12:32:01.720687 1152286 out.go:203] 
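The RSRC_INSUFFICIENT_REQ_MEMORY exit above comes from a start attempt that requested only 250 MiB, below the 1800 MB usable minimum minikube enforces. A minimal sketch of an invocation that clears that floor (only the profile name is taken from the log; the other flags are assumptions about how the failing attempt was issued):

	# hypothetical re-run with a memory allocation above the reported 1800 MB minimum
	out/minikube-linux-amd64 start -p functional-782022 --driver=docker --container-runtime=containerd --memory=2048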
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7a8e04cbed167       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   75e91da73587b       busybox-mount
	897d454b93764       6e38f40d628db       11 minutes ago      Running             storage-provisioner       3                   81f3483cb1bf7       storage-provisioner
	bcc5e572f0ecc       90550c43ad2bc       11 minutes ago      Running             kube-apiserver            0                   1eff7df7d085a       kube-apiserver-functional-782022
	f45ad0c405faa       46169d968e920       11 minutes ago      Running             kube-scheduler            1                   1ab56f1d7c59d       kube-scheduler-functional-782022
	f2c5ba8dccddf       5f1f5298c888d       11 minutes ago      Running             etcd                      1                   684c3b624bc22       etcd-functional-782022
	11011c88597fc       a0af72f2ec6d6       11 minutes ago      Running             kube-controller-manager   1                   8d20f6e965d70       kube-controller-manager-functional-782022
	1a1e77b489c91       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       2                   81f3483cb1bf7       storage-provisioner
	0bbcc10ce4fbd       52546a367cc9e       12 minutes ago      Running             coredns                   1                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	7184ba4a9a391       409467f978b4a       12 minutes ago      Running             kindnet-cni               1                   bbcc18db59f8f       kindnet-gk4hp
	7def58db713d2       df0860106674d       12 minutes ago      Running             kube-proxy                1                   560b48c7f2dd0       kube-proxy-dnlcd
	43db49e8ed43f       52546a367cc9e       12 minutes ago      Exited              coredns                   0                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	4a8d7cb63e108       409467f978b4a       12 minutes ago      Exited              kindnet-cni               0                   bbcc18db59f8f       kindnet-gk4hp
	f8040ae956a29       df0860106674d       12 minutes ago      Exited              kube-proxy                0                   560b48c7f2dd0       kube-proxy-dnlcd
	b0de26d17b60c       5f1f5298c888d       12 minutes ago      Exited              etcd                      0                   684c3b624bc22       etcd-functional-782022
	c72893cda0718       a0af72f2ec6d6       12 minutes ago      Exited              kube-controller-manager   0                   8d20f6e965d70       kube-controller-manager-functional-782022
	a87bec1b3ee14       46169d968e920       12 minutes ago      Exited              kube-scheduler            0                   1ab56f1d7c59d       kube-scheduler-functional-782022
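The listing above follows crictl's container listing format. As a sketch, the same view can be reproduced on the node itself, assuming crictl is present in the minikube node image (it is in recent kicbase images):

	# hypothetical reproduction of the table above from inside the node
	out/minikube-linux-amd64 -p functional-782022 ssh -- sudo crictl ps -a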
	
	
	==> containerd <==
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.115075869Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.115159590Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.115938966Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.117266601Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.770483915Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:35:11 functional-782022 containerd[3890]: time="2025-09-29T12:35:11.623297784Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:35:11 functional-782022 containerd[3890]: time="2025-09-29T12:35:11.623366154Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Sep 29 12:35:48 functional-782022 containerd[3890]: time="2025-09-29T12:35:48.203802148Z" level=info msg="shim disconnected" id=hvf4r6frtn4qbyl2d4ntu6qiq namespace=k8s.io
	Sep 29 12:35:48 functional-782022 containerd[3890]: time="2025-09-29T12:35:48.203985899Z" level=warning msg="cleaning up after shim disconnected" id=hvf4r6frtn4qbyl2d4ntu6qiq namespace=k8s.io
	Sep 29 12:35:48 functional-782022 containerd[3890]: time="2025-09-29T12:35:48.204009813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 12:35:48 functional-782022 containerd[3890]: time="2025-09-29T12:35:48.323694610Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-782022\""
	Sep 29 12:35:48 functional-782022 containerd[3890]: time="2025-09-29T12:35:48.327275397Z" level=info msg="ImageCreate event name:\"sha256:6629e2cffa4dc94356da5fba7f94e12f2a5d44ce7e8738beb6f3294013e00ae6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 29 12:35:48 functional-782022 containerd[3890]: time="2025-09-29T12:35:48.327856672Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-782022\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 29 12:36:43 functional-782022 containerd[3890]: time="2025-09-29T12:36:43.581368052Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Sep 29 12:36:43 functional-782022 containerd[3890]: time="2025-09-29T12:36:43.583268777Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:36:44 functional-782022 containerd[3890]: time="2025-09-29T12:36:44.255689630Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:36:46 functional-782022 containerd[3890]: time="2025-09-29T12:36:46.120044772Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:36:46 functional-782022 containerd[3890]: time="2025-09-29T12:36:46.120103082Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10998"
	Sep 29 12:36:50 functional-782022 containerd[3890]: time="2025-09-29T12:36:50.581887204Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Sep 29 12:36:50 functional-782022 containerd[3890]: time="2025-09-29T12:36:50.583780132Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:36:51 functional-782022 containerd[3890]: time="2025-09-29T12:36:51.235726659Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:36:53 functional-782022 containerd[3890]: time="2025-09-29T12:36:53.111672716Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:36:53 functional-782022 containerd[3890]: time="2025-09-29T12:36:53.111719957Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Sep 29 12:37:02 functional-782022 containerd[3890]: time="2025-09-29T12:37:02.582059791Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Sep 29 12:37:02 functional-782022 containerd[3890]: time="2025-09-29T12:37:02.584232334Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
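The PullImage failures above are all 429 "toomanyrequests" responses from unauthenticated pulls against registry-1.docker.io. One sketch of a workaround for a run like this (the harness may handle it differently; the image name is taken from the failed pull above) is to pull the image once on the host, where credentials or a mirror may be configured, and side-load it into the profile:

	# hypothetical workaround for the Docker Hub rate limit seen above
	docker pull kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-782022 image load kicbase/echo-server:latest

The recurring "failed to decode hosts.toml" / "invalid `host` tree" errors indicate containerd found a registry hosts.toml it could not parse. Assuming the node uses containerd's default /etc/containerd/certs.d layout (an assumption, not confirmed by this log), a minimal parseable file for docker.io could be written like this:

	# hypothetical repair: replace the unparseable hosts.toml with a minimal valid one
	out/minikube-linux-amd64 -p functional-782022 ssh -- "sudo mkdir -p /etc/containerd/certs.d/docker.io && echo 'server = \"https://registry-1.docker.io\"' | sudo tee /etc/containerd/certs.d/docker.io/hosts.toml"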
	
	
	==> coredns [0bbcc10ce4fbd6447f09bdba14f490c4feeec967a90d14aae73d8da93b645593] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48424 - 35954 "HINFO IN 1036536613831132561.4260401396292498277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044239158s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35447 - 30519 "HINFO IN 5997367880324166599.6195756209000097168. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028452503s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-782022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-782022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=functional-782022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-782022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:36:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:36:18 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:36:18 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:36:18 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:36:18 +0000   Mon, 29 Sep 2025 12:24:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-782022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e290b74a50b4c5797f445ba16a9585a
	  System UUID:                f8a45455-22cc-4de2-93e1-996efdb799ef
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-gv7jt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-9rv7l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-5bb876957f-2z7m2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m41s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-zm6rn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-782022                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-gk4hp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-782022              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-782022     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dnlcd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-782022              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-lgh7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8bxd5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
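The node description above is standard kubectl output; a sketch of reproducing it against this profile (assuming the run's kubeconfig is still in place):

	# hypothetical reproduction of the node description above
	out/minikube-linux-amd64 -p functional-782022 kubectl -- describe node functional-782022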
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +1.517728] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a5 e4 37 95 62 08 06
	[  +0.115888] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[ +12.890125] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 a3 59 25 5e a0 08 06
	[  +0.000394] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +5.179291] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	[Sep29 12:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[ +13.445656] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 1e 7c f1 b5 0d 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[  +7.699318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 ba 46 0d 66 00 08 06
	[  +0.000403] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[  +4.637857] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 16 6b 9e 59 3c 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	
	
	==> etcd [b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3] <==
	{"level":"warn","ts":"2025-09-29T12:24:09.508315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.514616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.520742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.528061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.539044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.545262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.552397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:25:10.869561Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:25:10.869658Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:25:10.869764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871378Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.871477Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T12:25:10.871570Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871571Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871554Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:25:10.871603Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871602Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871615Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:25:10.871618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T12:25:10.871626Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873625Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T12:25:10.873686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873718Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T12:25:10.873728Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [f2c5ba8dccddf943d6966d386ff390e8a8543cb1ecced1f03cc46a777ec12f12] <==
	{"level":"warn","ts":"2025-09-29T12:25:13.996935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.005286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.011672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.017730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.029057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.034829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.042108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.048270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.054551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.060723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.067227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.073406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.081074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.088521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.095420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.102293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.109042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.126214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.129431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.136215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.142728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.187334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:35:13.684888Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1072}
	{"level":"info","ts":"2025-09-29T12:35:13.705722Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1072,"took":"20.470272ms","hash":1187297717,"current-db-size-bytes":3743744,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1847296,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T12:35:13.705768Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1187297717,"revision":1072,"compact-revision":-1}
	
	
	==> kernel <==
	 12:37:03 up  5:19,  0 users,  load average: 0.31, 0.44, 1.23
	Linux functional-782022 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6] <==
	I0929 12:24:19.051272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 12:24:19.051515       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 12:24:19.051679       1 main.go:148] setting mtu 1500 for CNI 
	I0929 12:24:19.051696       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 12:24:19.051709       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T12:24:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 12:24:19.254620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 12:24:19.254648       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 12:24:19.254660       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 12:24:19.332675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 12:24:19.732111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 12:24:19.732144       1 metrics.go:72] Registering metrics
	I0929 12:24:19.732204       1 controller.go:711] "Syncing nftables rules"
	I0929 12:24:29.255389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:29.255476       1 main.go:301] handling current node
	I0929 12:24:39.258047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:39.258081       1 main.go:301] handling current node
	I0929 12:24:49.264050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:49.264086       1 main.go:301] handling current node
	I0929 12:24:59.256428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:59.256509       1 main.go:301] handling current node
	
	
	==> kindnet [7184ba4a9a391b6fe9af875c4cf7b7ec1e446596766a273e493fd073ac49febe] <==
	I0929 12:35:02.035510       1 main.go:301] handling current node
	I0929 12:35:12.034635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:12.034679       1 main.go:301] handling current node
	I0929 12:35:22.036604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:22.036643       1 main.go:301] handling current node
	I0929 12:35:32.038173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:32.038208       1 main.go:301] handling current node
	I0929 12:35:42.033873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:42.033923       1 main.go:301] handling current node
	I0929 12:35:52.042956       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:52.043017       1 main.go:301] handling current node
	I0929 12:36:02.034251       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:36:02.034297       1 main.go:301] handling current node
	I0929 12:36:12.034701       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:36:12.034748       1 main.go:301] handling current node
	I0929 12:36:22.037274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:36:22.037332       1 main.go:301] handling current node
	I0929 12:36:32.038253       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:36:32.038298       1 main.go:301] handling current node
	I0929 12:36:42.034103       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:36:42.034146       1 main.go:301] handling current node
	I0929 12:36:52.034304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:36:52.034342       1 main.go:301] handling current node
	I0929 12:37:02.033895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:37:02.033957       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bcc5e572f0ecc825bbe90fc103944f2ce30f09993a6a7567dc1ecef379c4c13c] <==
	I0929 12:25:41.209278       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.154.94"}
	I0929 12:25:41.675258       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.36.175"}
	I0929 12:26:14.935892       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:26:16.129046       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:27:40.912383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:27:42.979991       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:28:41.057645       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:29:07.263713       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:29:44.231603       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:07.893764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:58.445280       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:17.663939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:22.285425       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.89.153"}
	I0929 12:32:02.602777       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 12:32:02.704384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.83.159"}
	I0929 12:32:02.714442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.200.223"}
	I0929 12:32:13.424480       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:32:26.436667       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:33:18.083588       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:33:41.415637       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:34:38.700855       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:03.361330       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:14.619132       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:35:52.430038       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:36:21.670939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [11011c88597fc968304178a065fd8bcd3e220e2bbf17e0361a296ac56203173b] <==
	I0929 12:25:18.003261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.008602       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:25:18.010845       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:25:18.015307       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:25:18.015736       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:25:18.015770       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:25:18.016951       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 12:25:18.016999       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:25:18.017008       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:25:18.017061       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 12:25:18.017070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:25:18.017107       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:25:18.017114       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 12:25:18.017207       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:25:18.018697       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 12:25:18.021924       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:25:18.022150       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.024225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:25:18.038487       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 12:32:02.648029       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.652165       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.652282       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.655219       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.656606       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.660668       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97] <==
	I0929 12:24:17.256686       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:24:17.256794       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:24:17.256803       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 12:24:17.257152       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:24:17.257295       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:24:17.257472       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 12:24:17.257487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:24:17.259069       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:24:17.259091       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:24:17.259216       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:24:17.259310       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-782022"
	I0929 12:24:17.259352       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:24:17.260382       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:24:17.260468       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:24:17.260617       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:24:17.260680       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:24:17.260695       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:24:17.260702       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:24:17.260737       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.263845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.267580       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:24:17.271688       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-782022" podCIDRs=["10.244.0.0/24"]
	I0929 12:24:17.278010       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7def58db713d2ef060792b4dec8bbc81106fc7d4ec2b7322f8bfd3474f0c638f] <==
	I0929 12:25:01.693066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 12:25:01.694237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:02.740797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:04.709796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:09.735699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 12:25:20.593272       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:25:20.593323       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:25:20.593446       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:25:20.616514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:25:20.616586       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:25:20.622267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:25:20.622658       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:25:20.622676       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:20.624180       1 config.go:200] "Starting service config controller"
	I0929 12:25:20.624202       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:25:20.624207       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:25:20.624222       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:25:20.624247       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:25:20.624252       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:25:20.624279       1 config.go:309] "Starting node config controller"
	I0929 12:25:20.624291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:25:20.725264       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:25:20.725297       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:25:20.725368       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:25:20.725390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c] <==
	I0929 12:24:18.577732       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:24:18.633856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:24:18.734060       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:24:18.734113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:24:18.734257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:24:18.850654       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:24:18.850737       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:24:18.857213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:24:18.857575       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:24:18.857597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:24:18.859119       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:24:18.859526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:24:18.859143       1 config.go:200] "Starting service config controller"
	I0929 12:24:18.859205       1 config.go:309] "Starting node config controller"
	I0929 12:24:18.859571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:24:18.859580       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:24:18.859600       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:24:18.859300       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:24:18.859730       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:24:18.959729       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:24:18.959820       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:24:18.959861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0] <==
	E0929 12:24:10.048487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:24:10.048503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:24:10.048519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:24:10.048543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:24:10.048540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:10.048541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:24:10.048653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:24:10.049080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:10.049118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:24:10.884036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:24:10.973647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:24:10.995046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:11.049696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:24:11.081784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:11.196404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:24:11.198401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:24:11.218661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:24:11.362047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:24:13.945049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988013       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988132       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:25:10.988193       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:25:10.988224       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:25:10.988263       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:25:10.988295       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f45ad0c405faa9c17629b0b8f69da5b981a87271e7727e52d833a13ae68a780b] <==
	I0929 12:25:14.313310       1 serving.go:386] Generated self-signed cert in-memory
	I0929 12:25:14.639875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:25:14.639900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:14.644894       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 12:25:14.644913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644912       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.644932       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 12:25:14.644936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.645395       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:25:14.645456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:25:14.745516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.745524       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.745526       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Sep 29 12:36:28 functional-782022 kubelet[4825]: E0929 12:36:28.581501    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:36:31 functional-782022 kubelet[4825]: E0929 12:36:31.581860    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:36:32 functional-782022 kubelet[4825]: E0929 12:36:32.581866    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:36:32 functional-782022 kubelet[4825]: E0929 12:36:32.582127    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:36:33 functional-782022 kubelet[4825]: E0929 12:36:33.581484    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:36:36 functional-782022 kubelet[4825]: E0929 12:36:36.580713    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:36:39 functional-782022 kubelet[4825]: E0929 12:36:39.581309    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:36:43 functional-782022 kubelet[4825]: E0929 12:36:43.580937    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:36:43 functional-782022 kubelet[4825]: E0929 12:36:43.581727    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:36:45 functional-782022 kubelet[4825]: E0929 12:36:45.581225    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:36:46 functional-782022 kubelet[4825]: E0929 12:36:46.120410    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 29 12:36:46 functional-782022 kubelet[4825]: E0929 12:36:46.120481    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 29 12:36:46 functional-782022 kubelet[4825]: E0929 12:36:46.120621    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-gv7jt_default(b661658c-5c1b-440c-937c-5f64eae745c1): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:36:46 functional-782022 kubelet[4825]: E0929 12:36:46.120656    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:36:46 functional-782022 kubelet[4825]: E0929 12:36:46.581579    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:36:49 functional-782022 kubelet[4825]: E0929 12:36:49.581091    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:36:53 functional-782022 kubelet[4825]: E0929 12:36:53.112025    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 12:36:53 functional-782022 kubelet[4825]: E0929 12:36:53.112088    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 12:36:53 functional-782022 kubelet[4825]: E0929 12:36:53.112209    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(6f4f78d3-af0d-455f-a1a8-728cfbe1024e): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:36:53 functional-782022 kubelet[4825]: E0929 12:36:53.112251    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:36:56 functional-782022 kubelet[4825]: E0929 12:36:56.581113    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:36:57 functional-782022 kubelet[4825]: E0929 12:36:57.580934    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:36:58 functional-782022 kubelet[4825]: E0929 12:36:58.581798    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:36:59 functional-782022 kubelet[4825]: E0929 12:36:59.581642    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:37:01 functional-782022 kubelet[4825]: E0929 12:37:01.581907    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	
	
	==> storage-provisioner [1a1e77b489c91eed8244dea845d1c614d96d31d5eeb78fb695a95f6dcd0cc57d] <==
	I0929 12:25:07.440515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:25:07.443666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [897d454b937641e694c437680fd88a379717c743494df6b5e5785964165119dd] <==
	W0929 12:36:37.890469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:39.894124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:39.898050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:41.901366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:41.905870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:43.908793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:43.912843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:45.916164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:45.920034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:47.923195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:47.927385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:49.931081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:49.936061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:51.939095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:51.943704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:53.947065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:53.951035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:55.954379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:55.959211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:57.962492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:57.967128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:59.970743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:36:59.974433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:37:01.977783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:37:01.983214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
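Triage sketch (not part of the captured output): every kubelet pull failure in the dump above is the same unauthenticated Docker Hub 429 rate limit. Assuming curl and jq are available on the host, the 429 can be checked independently of the cluster against the same registry endpoints containerd used; library/nginx below is taken from the events above, the rest is generic:
	# Request an anonymous pull token, then HEAD the manifest the kubelet tried to fetch.
	TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
	  https://registry-1.docker.io/v2/library/nginx/manifests/latest
	# An HTTP 429 Too Many Requests response here matches the toomanyrequests errors logged by the kubelet.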
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
helpers_test.go:269: (dbg) Run:  kubectl --context functional-782022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5: exit status 1 (103.616415ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:52 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://7a8e04cbed167cce1d20d82f49a2d721d764fcfe23c8ad518a617952ec22d7d4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 12:31:55 +0000
	      Finished:     Mon, 29 Sep 2025 12:31:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqb4t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kqb4t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m11s  default-scheduler  Successfully assigned default/busybox-mount to functional-782022
	  Normal  Pulling    5m11s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.142s (2.142s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-gv7jt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:39 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfsw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gfsw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-gv7jt to functional-782022
	  Normal   Pulling    8m19s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m16s (x5 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m16s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    77s (x41 over 11m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     77s (x41 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-9rv7l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9cb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nq9cb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9rv7l to functional-782022
	  Normal   Pulling    8m7s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m4s (x5 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m4s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    82s (x40 over 11m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     71s (x41 over 11m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-2z7m2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:22 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9qzbr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9qzbr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m41s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-2z7m2 to functional-782022
	  Normal   Pulling    2m32s (x5 over 5m42s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m29s (x5 over 5m39s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m29s (x5 over 5m39s)  kubelet            Error: ErrImagePull
	  Warning  Failed     31s (x20 over 5m39s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    19s (x21 over 5m39s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvshw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rvshw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/nginx-svc to functional-782022
	  Normal   Pulling    8m12s (x5 over 11m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     8m9s (x5 over 11m)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m9s (x5 over 11m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    77s (x42 over 11m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     77s (x42 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:46 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lspjd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lspjd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/sp-pod to functional-782022
	  Normal   Pulling    8m6s (x5 over 11m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     8m2s (x5 over 11m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m2s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    66s (x41 over 11m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     66s (x41 over 11m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-lgh7n" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8bxd5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5: exit status 1
E0929 12:40:20.683613 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.43s)
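All of the non-running pods above are blocked on the same unauthenticated registry-1.docker.io pull limit. One possible mitigation when re-running the suite locally is to side-load the tag-referenced images so the node's containerd never contacts Docker Hub; this is a sketch only, assuming the images can be pulled once on the host (ideally after docker login), with the profile name and image list taken from the events above:
	for img in kicbase/echo-server:latest nginx:alpine nginx:latest mysql:5.7; do
	  docker pull "$img"                                               # one pull on the host
	  out/minikube-linux-amd64 image load "$img" -p functional-782022  # copy into the node's containerd store
	done
The dashboard images are pinned by digest in their manifests, so side-loading by tag may not satisfy those references; authenticating to raise the pull limit, or starting the profile with --registry-mirror pointed at a caching mirror, would be the alternatives there.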

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-782022 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-782022 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9rv7l" [ae8a4896-b6e0-4b77-979c-178f02f8aed1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 12:35:41.98143587 +0000 UTC m=+1106.565459059
functional_test.go:1645: (dbg) Run:  kubectl --context functional-782022 describe po hello-node-connect-7d85dfc575-9rv7l -n default
functional_test.go:1645: (dbg) kubectl --context functional-782022 describe po hello-node-connect-7d85dfc575-9rv7l -n default:
Name:             hello-node-connect-7d85dfc575-9rv7l
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-782022/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9cb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-nq9cb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9rv7l to functional-782022
Normal   Pulling    6m45s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m42s (x5 over 9m55s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m42s (x5 over 9m55s)   kubelet            Error: ErrImagePull
Warning  Failed     4m44s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m33s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
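The kubelet is being throttled by Docker Hub's unauthenticated pull rate limit (429 Too Many Requests) when fetching kicbase/echo-server. Two follow-ups worth noting, sketched here rather than taken from the recorded run: the pull can be reproduced directly on the node with crictl, and attaching registry credentials to the default service account would make subsequent pulls authenticated (dockerhub-creds, $DOCKER_USER and $DOCKER_TOKEN are placeholders):

	# Reproduce the failing pull from inside the minikube node (sketch).
	out/minikube-linux-amd64 -p functional-782022 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest
	# Possible mitigation: authenticated pulls are not subject to the anonymous limit.
	kubectl --context functional-782022 create secret docker-registry dockerhub-creds \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_TOKEN"
	kubectl --context functional-782022 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'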
functional_test.go:1645: (dbg) Run:  kubectl --context functional-782022 logs hello-node-connect-7d85dfc575-9rv7l -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-782022 logs hello-node-connect-7d85dfc575-9rv7l -n default: exit status 1 (71.767944ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9rv7l" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-782022 logs hello-node-connect-7d85dfc575-9rv7l -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-782022 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-9rv7l
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-782022/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9cb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-nq9cb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9rv7l to functional-782022
Normal   Pulling    6m45s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m42s (x5 over 9m55s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m42s (x5 over 9m55s)   kubelet            Error: ErrImagePull
Warning  Failed     4m44s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m33s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-782022 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-782022 logs -l app=hello-node-connect: exit status 1 (61.880329ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9rv7l" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-782022 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-782022 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.36.175
IPs:                      10.109.36.175
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30115/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
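The Service describe above ends with an empty Endpoints field: NodePort 30115 has nothing to route to because the only pod matching the app=hello-node-connect selector never became Ready. A quick way to confirm that the selector and the pod state line up, sketched here and not part of the recorded run:

	kubectl --context functional-782022 get endpoints hello-node-connect
	kubectl --context functional-782022 get pods -l app=hello-node-connect -o wide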
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-782022
helpers_test.go:243: (dbg) docker inspect functional-782022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	        "Created": "2025-09-29T12:23:56.273004679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1134005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:23:56.310015171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hostname",
	        "HostsPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hosts",
	        "LogPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298-json.log",
	        "Name": "/functional-782022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-782022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-782022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	                "LowerDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-782022",
	                "Source": "/var/lib/docker/volumes/functional-782022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-782022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-782022",
	                "name.minikube.sigs.k8s.io": "functional-782022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50c97dcabfb7e5784a6ece9564a867bb96ac9a43766c5bddf69d122258ab8e5a",
	            "SandboxKey": "/var/run/docker/netns/50c97dcabfb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33276"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33277"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33280"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33278"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33279"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-782022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:3f:40:a7:34:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1188f63361a712974d693e573886a912852ad97f16abca5373c8b53f08ee79f7",
	                    "EndpointID": "aaa66c085157d64022cbc7f79013ed8657f004e3bea603739a48e13aecf19afc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-782022",
	                        "1786c46f3852"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
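When only a single field of the inspect output matters, the same command accepts a Go-template format string instead of dumping the whole document; the examples below are a usage sketch of standard docker inspect templating, not commands the harness runs:

	# Container IP on the functional-782022 network (192.168.49.2 in the dump above).
	docker inspect -f '{{ (index .NetworkSettings.Networks "functional-782022").IPAddress }}' functional-782022
	# Host port mapped to the container's SSH port (33276 in the dump above).
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' functional-782022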
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-782022 -n functional-782022
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 logs -n 25: (1.442390126s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-782022 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh       │ functional-782022 ssh -- ls -la /mount-9p                                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh       │ functional-782022 ssh cat /mount-9p/test-1759149110934407752                                                                      │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh       │ functional-782022 ssh stat /mount-9p/created-by-test                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh       │ functional-782022 ssh stat /mount-9p/created-by-pod                                                                               │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh       │ functional-782022 ssh sudo umount -f /mount-9p                                                                                    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ mount     │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdspecific-port3447882230/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ ssh       │ functional-782022 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ ssh       │ functional-782022 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh       │ functional-782022 ssh -- ls -la /mount-9p                                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh       │ functional-782022 ssh sudo umount -f /mount-9p                                                                                    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ ssh       │ functional-782022 ssh findmnt -T /mount1                                                                                          │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ mount     │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount1 --alsologtostderr -v=1                │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ mount     │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount2 --alsologtostderr -v=1                │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ mount     │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount3 --alsologtostderr -v=1                │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ ssh       │ functional-782022 ssh findmnt -T /mount1                                                                                          │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ ssh       │ functional-782022 ssh findmnt -T /mount2                                                                                          │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ ssh       │ functional-782022 ssh findmnt -T /mount3                                                                                          │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ mount     │ -p functional-782022 --kill=true                                                                                                  │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start     │ -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                   │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start     │ -p functional-782022 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start     │ -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                   │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-782022 --alsologtostderr -v=1                                                                    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ service   │ functional-782022 service list                                                                                                    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ service   │ functional-782022 service list -o json                                                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:32:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:32:01.623393 1152286 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:32:01.623483 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623490 1152286 out.go:374] Setting ErrFile to fd 2...
	I0929 12:32:01.623494 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623773 1152286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:32:01.624207 1152286 out.go:368] Setting JSON to false
	I0929 12:32:01.625266 1152286 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18859,"bootTime":1759130263,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:32:01.625351 1152286 start.go:140] virtualization: kvm guest
	I0929 12:32:01.627031 1152286 out.go:179] * [functional-782022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:32:01.628165 1152286 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:32:01.628190 1152286 notify.go:220] Checking for updates...
	I0929 12:32:01.630582 1152286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:32:01.631578 1152286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:32:01.632499 1152286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:32:01.633553 1152286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:32:01.634463 1152286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:32:01.635841 1152286 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:32:01.636357 1152286 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:32:01.659705 1152286 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:32:01.659797 1152286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:32:01.714441 1152286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 12:32:01.703718947 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:32:01.714537 1152286 docker.go:318] overlay module found
	I0929 12:32:01.716020 1152286 out.go:179] * Using the docker driver based on the existing profile
	I0929 12:32:01.717088 1152286 start.go:304] selected driver: docker
	I0929 12:32:01.717115 1152286 start.go:924] validating driver "docker" against &{Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:32:01.717223 1152286 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:32:01.718814 1152286 out.go:203] 
	W0929 12:32:01.719743 1152286 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 12:32:01.720687 1152286 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7a8e04cbed167       56cc512116c8f       3 minutes ago       Exited              mount-munger              0                   75e91da73587b       busybox-mount
	897d454b93764       6e38f40d628db       10 minutes ago      Running             storage-provisioner       3                   81f3483cb1bf7       storage-provisioner
	bcc5e572f0ecc       90550c43ad2bc       10 minutes ago      Running             kube-apiserver            0                   1eff7df7d085a       kube-apiserver-functional-782022
	f45ad0c405faa       46169d968e920       10 minutes ago      Running             kube-scheduler            1                   1ab56f1d7c59d       kube-scheduler-functional-782022
	f2c5ba8dccddf       5f1f5298c888d       10 minutes ago      Running             etcd                      1                   684c3b624bc22       etcd-functional-782022
	11011c88597fc       a0af72f2ec6d6       10 minutes ago      Running             kube-controller-manager   1                   8d20f6e965d70       kube-controller-manager-functional-782022
	1a1e77b489c91       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       2                   81f3483cb1bf7       storage-provisioner
	0bbcc10ce4fbd       52546a367cc9e       10 minutes ago      Running             coredns                   1                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	7184ba4a9a391       409467f978b4a       10 minutes ago      Running             kindnet-cni               1                   bbcc18db59f8f       kindnet-gk4hp
	7def58db713d2       df0860106674d       10 minutes ago      Running             kube-proxy                1                   560b48c7f2dd0       kube-proxy-dnlcd
	43db49e8ed43f       52546a367cc9e       11 minutes ago      Exited              coredns                   0                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	4a8d7cb63e108       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   bbcc18db59f8f       kindnet-gk4hp
	f8040ae956a29       df0860106674d       11 minutes ago      Exited              kube-proxy                0                   560b48c7f2dd0       kube-proxy-dnlcd
	b0de26d17b60c       5f1f5298c888d       11 minutes ago      Exited              etcd                      0                   684c3b624bc22       etcd-functional-782022
	c72893cda0718       a0af72f2ec6d6       11 minutes ago      Exited              kube-controller-manager   0                   8d20f6e965d70       kube-controller-manager-functional-782022
	a87bec1b3ee14       46169d968e920       11 minutes ago      Exited              kube-scheduler            0                   1ab56f1d7c59d       kube-scheduler-functional-782022
	
	
	==> containerd <==
	Sep 29 12:33:33 functional-782022 containerd[3890]: time="2025-09-29T12:33:33.582298007Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 29 12:33:33 functional-782022 containerd[3890]: time="2025-09-29T12:33:33.584153855Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:33:34 functional-782022 containerd[3890]: time="2025-09-29T12:33:34.248008491Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:33:36 functional-782022 containerd[3890]: time="2025-09-29T12:33:36.094586871Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:33:36 functional-782022 containerd[3890]: time="2025-09-29T12:33:36.094638707Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Sep 29 12:33:36 functional-782022 containerd[3890]: time="2025-09-29T12:33:36.582039778Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 12:33:36 functional-782022 containerd[3890]: time="2025-09-29T12:33:36.584010213Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:33:37 functional-782022 containerd[3890]: time="2025-09-29T12:33:37.240446892Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:33:39 functional-782022 containerd[3890]: time="2025-09-29T12:33:39.090831737Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:33:39 functional-782022 containerd[3890]: time="2025-09-29T12:33:39.090880790Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 29 12:34:32 functional-782022 containerd[3890]: time="2025-09-29T12:34:32.584696712Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Sep 29 12:34:32 functional-782022 containerd[3890]: time="2025-09-29T12:34:32.586703104Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:34:33 functional-782022 containerd[3890]: time="2025-09-29T12:34:33.246710875Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:34:35 functional-782022 containerd[3890]: time="2025-09-29T12:34:35.125251627Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:34:35 functional-782022 containerd[3890]: time="2025-09-29T12:34:35.125340474Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10965"
	Sep 29 12:35:06 functional-782022 containerd[3890]: time="2025-09-29T12:35:06.582119315Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 29 12:35:06 functional-782022 containerd[3890]: time="2025-09-29T12:35:06.584027676Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:35:07 functional-782022 containerd[3890]: time="2025-09-29T12:35:07.242289095Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.115075869Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.115159590Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.115938966Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.117266601Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:35:09 functional-782022 containerd[3890]: time="2025-09-29T12:35:09.770483915Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:35:11 functional-782022 containerd[3890]: time="2025-09-29T12:35:11.623297784Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:35:11 functional-782022 containerd[3890]: time="2025-09-29T12:35:11.623366154Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	
	
	==> coredns [0bbcc10ce4fbd6447f09bdba14f490c4feeec967a90d14aae73d8da93b645593] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48424 - 35954 "HINFO IN 1036536613831132561.4260401396292498277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044239158s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35447 - 30519 "HINFO IN 5997367880324166599.6195756209000097168. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028452503s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-782022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-782022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=functional-782022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-782022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:35:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:32:13 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:32:13 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:32:13 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:32:13 +0000   Mon, 29 Sep 2025 12:24:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-782022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e290b74a50b4c5797f445ba16a9585a
	  System UUID:                f8a45455-22cc-4de2-93e1-996efdb799ef
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-gv7jt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-9rv7l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-2z7m2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m21s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 coredns-66bc5c9577-zm6rn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-782022                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-gk4hp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-782022              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-782022     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-dnlcd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-782022              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-lgh7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8bxd5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +1.517728] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a5 e4 37 95 62 08 06
	[  +0.115888] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[ +12.890125] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 a3 59 25 5e a0 08 06
	[  +0.000394] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +5.179291] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	[Sep29 12:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[ +13.445656] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 1e 7c f1 b5 0d 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[  +7.699318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 ba 46 0d 66 00 08 06
	[  +0.000403] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[  +4.637857] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 16 6b 9e 59 3c 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	
	
	==> etcd [b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3] <==
	{"level":"warn","ts":"2025-09-29T12:24:09.508315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.514616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.520742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.528061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.539044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.545262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.552397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:25:10.869561Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:25:10.869658Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:25:10.869764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871378Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.871477Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T12:25:10.871570Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871571Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871554Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:25:10.871603Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871602Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871615Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:25:10.871618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T12:25:10.871626Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873625Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T12:25:10.873686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873718Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T12:25:10.873728Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [f2c5ba8dccddf943d6966d386ff390e8a8543cb1ecced1f03cc46a777ec12f12] <==
	{"level":"warn","ts":"2025-09-29T12:25:13.996935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.005286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.011672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.017730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.029057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.034829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.042108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.048270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.054551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.060723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.067227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.073406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.081074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.088521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.095420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.102293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.109042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.126214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.129431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.136215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.142728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.187334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:35:13.684888Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1072}
	{"level":"info","ts":"2025-09-29T12:35:13.705722Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1072,"took":"20.470272ms","hash":1187297717,"current-db-size-bytes":3743744,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1847296,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T12:35:13.705768Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1187297717,"revision":1072,"compact-revision":-1}
	
	
	==> kernel <==
	 12:35:43 up  5:18,  0 users,  load average: 0.53, 0.49, 1.32
	Linux functional-782022 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6] <==
	I0929 12:24:19.051272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 12:24:19.051515       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 12:24:19.051679       1 main.go:148] setting mtu 1500 for CNI 
	I0929 12:24:19.051696       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 12:24:19.051709       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T12:24:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 12:24:19.254620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 12:24:19.254648       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 12:24:19.254660       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 12:24:19.332675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 12:24:19.732111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 12:24:19.732144       1 metrics.go:72] Registering metrics
	I0929 12:24:19.732204       1 controller.go:711] "Syncing nftables rules"
	I0929 12:24:29.255389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:29.255476       1 main.go:301] handling current node
	I0929 12:24:39.258047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:39.258081       1 main.go:301] handling current node
	I0929 12:24:49.264050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:49.264086       1 main.go:301] handling current node
	I0929 12:24:59.256428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:59.256509       1 main.go:301] handling current node
	
	
	==> kindnet [7184ba4a9a391b6fe9af875c4cf7b7ec1e446596766a273e493fd073ac49febe] <==
	I0929 12:33:42.039776       1 main.go:301] handling current node
	I0929 12:33:52.034869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:33:52.034902       1 main.go:301] handling current node
	I0929 12:34:02.034322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:34:02.034366       1 main.go:301] handling current node
	I0929 12:34:12.034953       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:34:12.035007       1 main.go:301] handling current node
	I0929 12:34:22.043052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:34:22.043088       1 main.go:301] handling current node
	I0929 12:34:32.035072       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:34:32.035110       1 main.go:301] handling current node
	I0929 12:34:42.034387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:34:42.034426       1 main.go:301] handling current node
	I0929 12:34:52.035258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:34:52.035297       1 main.go:301] handling current node
	I0929 12:35:02.035455       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:02.035510       1 main.go:301] handling current node
	I0929 12:35:12.034635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:12.034679       1 main.go:301] handling current node
	I0929 12:35:22.036604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:22.036643       1 main.go:301] handling current node
	I0929 12:35:32.038173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:32.038208       1 main.go:301] handling current node
	I0929 12:35:42.033873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:35:42.033923       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bcc5e572f0ecc825bbe90fc103944f2ce30f09993a6a7567dc1ecef379c4c13c] <==
	I0929 12:25:35.630809       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.26.213"}
	I0929 12:25:39.875572       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.146.92"}
	I0929 12:25:41.209278       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.154.94"}
	I0929 12:25:41.675258       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.36.175"}
	I0929 12:26:14.935892       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:26:16.129046       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:27:40.912383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:27:42.979991       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:28:41.057645       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:29:07.263713       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:29:44.231603       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:07.893764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:58.445280       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:17.663939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:22.285425       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.89.153"}
	I0929 12:32:02.602777       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 12:32:02.704384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.83.159"}
	I0929 12:32:02.714442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.200.223"}
	I0929 12:32:13.424480       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:32:26.436667       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:33:18.083588       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:33:41.415637       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:34:38.700855       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:03.361330       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:14.619132       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [11011c88597fc968304178a065fd8bcd3e220e2bbf17e0361a296ac56203173b] <==
	I0929 12:25:18.003261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.008602       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:25:18.010845       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:25:18.015307       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:25:18.015736       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:25:18.015770       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:25:18.016951       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 12:25:18.016999       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:25:18.017008       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:25:18.017061       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 12:25:18.017070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:25:18.017107       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:25:18.017114       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 12:25:18.017207       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:25:18.018697       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 12:25:18.021924       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:25:18.022150       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.024225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:25:18.038487       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 12:32:02.648029       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.652165       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.652282       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.655219       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.656606       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.660668       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97] <==
	I0929 12:24:17.256686       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:24:17.256794       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:24:17.256803       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 12:24:17.257152       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:24:17.257295       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:24:17.257472       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 12:24:17.257487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:24:17.259069       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:24:17.259091       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:24:17.259216       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:24:17.259310       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-782022"
	I0929 12:24:17.259352       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:24:17.260382       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:24:17.260468       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:24:17.260617       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:24:17.260680       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:24:17.260695       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:24:17.260702       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:24:17.260737       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.263845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.267580       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:24:17.271688       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-782022" podCIDRs=["10.244.0.0/24"]
	I0929 12:24:17.278010       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7def58db713d2ef060792b4dec8bbc81106fc7d4ec2b7322f8bfd3474f0c638f] <==
	I0929 12:25:01.693066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 12:25:01.694237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:02.740797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:04.709796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:09.735699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 12:25:20.593272       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:25:20.593323       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:25:20.593446       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:25:20.616514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:25:20.616586       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:25:20.622267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:25:20.622658       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:25:20.622676       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:20.624180       1 config.go:200] "Starting service config controller"
	I0929 12:25:20.624202       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:25:20.624207       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:25:20.624222       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:25:20.624247       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:25:20.624252       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:25:20.624279       1 config.go:309] "Starting node config controller"
	I0929 12:25:20.624291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:25:20.725264       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:25:20.725297       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:25:20.725368       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:25:20.725390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c] <==
	I0929 12:24:18.577732       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:24:18.633856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:24:18.734060       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:24:18.734113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:24:18.734257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:24:18.850654       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:24:18.850737       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:24:18.857213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:24:18.857575       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:24:18.857597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:24:18.859119       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:24:18.859526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:24:18.859143       1 config.go:200] "Starting service config controller"
	I0929 12:24:18.859205       1 config.go:309] "Starting node config controller"
	I0929 12:24:18.859571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:24:18.859580       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:24:18.859600       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:24:18.859300       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:24:18.859730       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:24:18.959729       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:24:18.959820       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:24:18.959861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0] <==
	E0929 12:24:10.048487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:24:10.048503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:24:10.048519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:24:10.048543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:24:10.048540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:10.048541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:24:10.048653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:24:10.049080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:10.049118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:24:10.884036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:24:10.973647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:24:10.995046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:11.049696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:24:11.081784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:11.196404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:24:11.198401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:24:11.218661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:24:11.362047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:24:13.945049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988013       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988132       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:25:10.988193       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:25:10.988224       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:25:10.988263       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:25:10.988295       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f45ad0c405faa9c17629b0b8f69da5b981a87271e7727e52d833a13ae68a780b] <==
	I0929 12:25:14.313310       1 serving.go:386] Generated self-signed cert in-memory
	I0929 12:25:14.639875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:25:14.639900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:14.644894       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 12:25:14.644913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644912       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.644932       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 12:25:14.644936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.645395       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:25:14.645456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:25:14.745516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.745524       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.745526       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Sep 29 12:35:08 functional-782022 kubelet[4825]: E0929 12:35:08.580714    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:35:09 functional-782022 kubelet[4825]: E0929 12:35:09.115400    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 12:35:09 functional-782022 kubelet[4825]: E0929 12:35:09.115470    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 12:35:09 functional-782022 kubelet[4825]: E0929 12:35:09.115687    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n_kubernetes-dashboard(d0347370-124a-4ceb-87db-90f10e018aa4): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:35:09 functional-782022 kubelet[4825]: E0929 12:35:09.115743    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:35:11 functional-782022 kubelet[4825]: E0929 12:35:11.623657    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:35:11 functional-782022 kubelet[4825]: E0929 12:35:11.623720    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:35:11 functional-782022 kubelet[4825]: E0929 12:35:11.623825    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-8bxd5_kubernetes-dashboard(5f8be926-b1a2-41b6-aa79-115ced6fb907): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:35:11 functional-782022 kubelet[4825]: E0929 12:35:11.623862    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:35:12 functional-782022 kubelet[4825]: E0929 12:35:12.581345    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:35:18 functional-782022 kubelet[4825]: E0929 12:35:18.580944    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:35:18 functional-782022 kubelet[4825]: E0929 12:35:18.581075    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:35:19 functional-782022 kubelet[4825]: E0929 12:35:19.581528    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:35:21 functional-782022 kubelet[4825]: E0929 12:35:21.580696    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:35:23 functional-782022 kubelet[4825]: E0929 12:35:23.581133    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:35:24 functional-782022 kubelet[4825]: E0929 12:35:24.581441    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:35:26 functional-782022 kubelet[4825]: E0929 12:35:26.585484    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:35:30 functional-782022 kubelet[4825]: E0929 12:35:30.581495    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:35:33 functional-782022 kubelet[4825]: E0929 12:35:33.580596    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:35:34 functional-782022 kubelet[4825]: E0929 12:35:34.585827    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:35:34 functional-782022 kubelet[4825]: E0929 12:35:34.586089    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:35:36 functional-782022 kubelet[4825]: E0929 12:35:36.584487    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:35:36 functional-782022 kubelet[4825]: E0929 12:35:36.584660    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:35:41 functional-782022 kubelet[4825]: E0929 12:35:41.582142    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:35:42 functional-782022 kubelet[4825]: E0929 12:35:42.581601    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	
	
	==> storage-provisioner [1a1e77b489c91eed8244dea845d1c614d96d31d5eeb78fb695a95f6dcd0cc57d] <==
	I0929 12:25:07.440515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:25:07.443666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [897d454b937641e694c437680fd88a379717c743494df6b5e5785964165119dd] <==
	W0929 12:35:19.593065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:21.596702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:21.600835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:23.603585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:23.607288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:25.611174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:25.615511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:27.619372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:27.625199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:29.628494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:29.632447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:31.636011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:31.640027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:33.642989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:33.646820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:35.649355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:35.652982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:37.656473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:37.660115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:39.663092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:39.666772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:41.670296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:41.674024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:43.678103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:35:43.682544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
helpers_test.go:269: (dbg) Run:  kubectl --context functional-782022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5: exit status 1 (104.944348ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:52 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://7a8e04cbed167cce1d20d82f49a2d721d764fcfe23c8ad518a617952ec22d7d4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 12:31:55 +0000
	      Finished:     Mon, 29 Sep 2025 12:31:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqb4t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kqb4t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-782022
	  Normal  Pulling    3m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m49s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.142s (2.142s including waiting). Image size: 2395207 bytes.
	  Normal  Created    3m49s  kubelet            Created container: mount-munger
	  Normal  Started    3m49s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-gv7jt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:39 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfsw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gfsw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-gv7jt to functional-782022
	  Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x19 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m34s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-9rv7l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9cb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nq9cb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9rv7l to functional-782022
	  Normal   Pulling    6m47s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m44s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m44s (x5 over 9m57s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m46s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x40 over 9m56s)     kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-2z7m2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:22 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9qzbr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9qzbr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m22s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-2z7m2 to functional-782022
	  Normal   Pulling    72s (x5 over 4m22s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     69s (x5 over 4m19s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     69s (x5 over 4m19s)   kubelet            Error: ErrImagePull
	  Warning  Failed     20s (x15 over 4m19s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    8s (x16 over 4m19s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvshw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rvshw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/nginx-svc to functional-782022
	  Normal   Pulling    6m52s (x5 over 10m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m49s (x5 over 9m59s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m49s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m54s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:46 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lspjd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lspjd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/sp-pod to functional-782022
	  Normal   Pulling    6m46s (x5 over 9m58s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m42s (x5 over 9m54s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m42s (x5 over 9m54s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m50s (x19 over 9m54s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m23s (x21 over 9m54s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-lgh7n" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8bxd5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.06s)
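Every pull failure recorded above has the same root cause: anonymous pulls from registry-1.docker.io hitting the Docker Hub rate limit (HTTP 429, toomanyrequests). As a rough, illustrative check that is not part of the test run, the remaining anonymous budget on the affected host can be read from the registry's rate-limit headers; the token endpoint and the ratelimitpreview/test repository below come from Docker's download-rate-limit documentation, not from this report:

  # Sketch: read Docker Hub's anonymous pull-rate headers from the host that is seeing 429s.
  # Assumes curl and jq are installed; per Docker's docs a HEAD request should not consume the quota.
  TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  curl -fsSI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i '^ratelimit'

Authenticated pulls (docker login on the node, or an imagePullSecret) or a registry mirror are the usual ways around the anonymous limit, as the increase-rate-limit link in the error message suggests.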

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (369.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [eb829926-f269-4be7-ade6-098746403540] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.077598817s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-782022 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-782022 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-782022 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-782022 apply -f testdata/storage-provisioner/pod.yaml
I0929 12:25:46.146173 1101494 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [06f2ff8c-eced-4819-bcca-da8efb85234c] Pending
helpers_test.go:352: "sp-pod" [06f2ff8c-eced-4819-bcca-da8efb85234c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0929 12:26:01.661272 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:26:42.623182 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:28:04.545188 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 12:31:46.456673099 +0000 UTC m=+871.040696288
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-782022 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-782022 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-782022/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:25:46 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lspjd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-lspjd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-782022
Normal   Pulling    2m48s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m44s (x5 over 5m56s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m44s (x5 over 5m56s)  kubelet            Error: ErrImagePull
Warning  Failed     52s (x19 over 5m56s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    25s (x21 over 5m56s)   kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-782022 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-782022 logs sp-pod -n default: exit status 1 (69.658086ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-782022 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
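The assertion above is a 6-minute poll for a pod labelled test=storage-provisioner in the default namespace. A by-hand equivalent is sketched below, reusing the context, claim name, label, and timeout that appear in this log; kubectl wait stands in for the harness's own polling loop and is not what the test literally runs:

  # Sketch: reproduce the PVC test's wait outside the Go harness.
  kubectl --context functional-782022 get pvc myclaim -n default    # expected to be Bound
  kubectl --context functional-782022 wait pod -n default -l test=storage-provisioner --for=condition=Ready --timeout=6m
  # Same post-mortem the harness runs on failure:
  kubectl --context functional-782022 describe pod sp-pod -n default
  kubectl --context functional-782022 logs sp-pod -n default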
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-782022
helpers_test.go:243: (dbg) docker inspect functional-782022:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	        "Created": "2025-09-29T12:23:56.273004679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1134005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:23:56.310015171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hostname",
	        "HostsPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hosts",
	        "LogPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298-json.log",
	        "Name": "/functional-782022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-782022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-782022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	                "LowerDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-782022",
	                "Source": "/var/lib/docker/volumes/functional-782022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-782022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-782022",
	                "name.minikube.sigs.k8s.io": "functional-782022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50c97dcabfb7e5784a6ece9564a867bb96ac9a43766c5bddf69d122258ab8e5a",
	            "SandboxKey": "/var/run/docker/netns/50c97dcabfb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33276"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33277"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33280"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33278"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33279"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-782022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:3f:40:a7:34:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1188f63361a712974d693e573886a912852ad97f16abca5373c8b53f08ee79f7",
	                    "EndpointID": "aaa66c085157d64022cbc7f79013ed8657f004e3bea603739a48e13aecf19afc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-782022",
	                        "1786c46f3852"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
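The harness only needs a few fields from this inspect dump (state, port bindings, network). As an aside, a single value can be pulled out with docker inspect --format instead of reading the full JSON; the sketch below assumes the container name and the 8441/tcp apiserver port shown above:

  # Sketch: extract the host port bound to the node container's 8441/tcp (apiserver) port.
  docker inspect --format '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-782022
  # For the state captured above this prints 33279.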
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-782022 -n functional-782022
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 logs -n 25: (1.455781979s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-782022 ssh -n functional-782022 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ addons  │ functional-782022 addons list                                                                                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ addons  │ functional-782022 addons list -o json                                                                                                                                 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ license │                                                                                                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo systemctl is-active docker                                                                                                                 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ ssh     │ functional-782022 ssh sudo systemctl is-active crio                                                                                                                   │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ image   │ functional-782022 image load --daemon kicbase/echo-server:functional-782022 --alsologtostderr                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image ls                                                                                                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image load --daemon kicbase/echo-server:functional-782022 --alsologtostderr                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image ls                                                                                                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image load --daemon kicbase/echo-server:functional-782022 --alsologtostderr                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image ls                                                                                                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image save kicbase/echo-server:functional-782022 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image rm kicbase/echo-server:functional-782022 --alsologtostderr                                                                                    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image ls                                                                                                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image ls                                                                                                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ image   │ functional-782022 image save --daemon kicbase/echo-server:functional-782022 --alsologtostderr                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo cat /etc/ssl/certs/1101494.pem                                                                                                             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo cat /usr/share/ca-certificates/1101494.pem                                                                                                 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo cat /etc/ssl/certs/11014942.pem                                                                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo cat /usr/share/ca-certificates/11014942.pem                                                                                                │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	│ ssh     │ functional-782022 ssh sudo cat /etc/test/nested/copy/1101494/hosts                                                                                                    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │ 29 Sep 25 12:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:24:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:24:51.891313 1139323 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:24:51.891597 1139323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.891602 1139323 out.go:374] Setting ErrFile to fd 2...
	I0929 12:24:51.891605 1139323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.891819 1139323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:24:51.892296 1139323 out.go:368] Setting JSON to false
	I0929 12:24:51.893346 1139323 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18429,"bootTime":1759130263,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:24:51.893443 1139323 start.go:140] virtualization: kvm guest
	I0929 12:24:51.895320 1139323 out.go:179] * [functional-782022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:24:51.896264 1139323 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:24:51.896323 1139323 notify.go:220] Checking for updates...
	I0929 12:24:51.898425 1139323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:24:51.899540 1139323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:24:51.900592 1139323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:24:51.901625 1139323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:24:51.902551 1139323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:24:51.904014 1139323 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:24:51.904129 1139323 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:24:51.928600 1139323 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:24:51.928676 1139323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:51.982815 1139323 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-29 12:24:51.97257401 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:51.982943 1139323 docker.go:318] overlay module found
	I0929 12:24:51.984670 1139323 out.go:179] * Using the docker driver based on existing profile
	I0929 12:24:51.985681 1139323 start.go:304] selected driver: docker
	I0929 12:24:51.985690 1139323 start.go:924] validating driver "docker" against &{Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetric
s:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:51.985798 1139323 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:24:51.985912 1139323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:52.044897 1139323 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-29 12:24:52.035138437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:52.045605 1139323 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:24:52.045637 1139323 cni.go:84] Creating CNI manager for ""
	I0929 12:24:52.045695 1139323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:24:52.045737 1139323 start.go:348] cluster config:
	{Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:52.047428 1139323 out.go:179] * Starting "functional-782022" primary control-plane node in "functional-782022" cluster
	I0929 12:24:52.048434 1139323 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 12:24:52.049472 1139323 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:24:52.050413 1139323 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 12:24:52.050449 1139323 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 12:24:52.050462 1139323 cache.go:58] Caching tarball of preloaded images
	I0929 12:24:52.050513 1139323 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:24:52.050567 1139323 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 12:24:52.050575 1139323 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 12:24:52.050745 1139323 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/config.json ...
	I0929 12:24:52.072134 1139323 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:24:52.072148 1139323 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:24:52.072167 1139323 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:24:52.072198 1139323 start.go:360] acquireMachinesLock for functional-782022: {Name:mkc8f24d1469fc54909829b29661c7e8aeb5f7b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:24:52.072266 1139323 start.go:364] duration metric: took 49.254µs to acquireMachinesLock for "functional-782022"
	I0929 12:24:52.072286 1139323 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:24:52.072300 1139323 fix.go:54] fixHost starting: 
	I0929 12:24:52.072595 1139323 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
	I0929 12:24:52.090413 1139323 fix.go:112] recreateIfNeeded on functional-782022: state=Running err=<nil>
	W0929 12:24:52.090442 1139323 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 12:24:52.092078 1139323 out.go:252] * Updating the running docker "functional-782022" container ...
	I0929 12:24:52.092104 1139323 machine.go:93] provisionDockerMachine start ...
	I0929 12:24:52.092204 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:52.110414 1139323 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:52.110654 1139323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33276 <nil> <nil>}
	I0929 12:24:52.110660 1139323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:24:52.247473 1139323 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-782022
	
	I0929 12:24:52.247505 1139323 ubuntu.go:182] provisioning hostname "functional-782022"
	I0929 12:24:52.247569 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:52.266299 1139323 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:52.266605 1139323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33276 <nil> <nil>}
	I0929 12:24:52.266618 1139323 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-782022 && echo "functional-782022" | sudo tee /etc/hostname
	I0929 12:24:52.418121 1139323 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-782022
	
	I0929 12:24:52.418197 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:52.436562 1139323 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:52.436796 1139323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33276 <nil> <nil>}
	I0929 12:24:52.436810 1139323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-782022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-782022/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-782022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:24:52.576018 1139323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:24:52.576040 1139323 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 12:24:52.576062 1139323 ubuntu.go:190] setting up certificates
	I0929 12:24:52.576074 1139323 provision.go:84] configureAuth start
	I0929 12:24:52.576144 1139323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-782022
	I0929 12:24:52.594483 1139323 provision.go:143] copyHostCerts
	I0929 12:24:52.594544 1139323 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 12:24:52.594553 1139323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 12:24:52.594627 1139323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 12:24:52.594749 1139323 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 12:24:52.594755 1139323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 12:24:52.594782 1139323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 12:24:52.594851 1139323 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 12:24:52.594854 1139323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 12:24:52.594875 1139323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 12:24:52.594933 1139323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.functional-782022 san=[127.0.0.1 192.168.49.2 functional-782022 localhost minikube]
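The san=[...] list above is what gets baked into the regenerated machines/server.pem. If reproducing this run locally, one way to confirm which names the certificate actually covers is the sketch below; the path is the one named in the log line and the openssl invocation is standard, but treat it as illustrative rather than part of the test:

    # Show the Subject Alternative Names of the generated machine server cert
    # (path taken from the provision.go line above; adjust for your MINIKUBE_HOME).
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'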
	I0929 12:24:52.650123 1139323 provision.go:177] copyRemoteCerts
	I0929 12:24:52.650179 1139323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:24:52.650217 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:52.668471 1139323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
	I0929 12:24:52.767846 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 12:24:52.794226 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:24:52.821225 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:24:52.847570 1139323 provision.go:87] duration metric: took 271.481775ms to configureAuth
	I0929 12:24:52.847592 1139323 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:24:52.847769 1139323 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:24:52.847775 1139323 machine.go:96] duration metric: took 755.666617ms to provisionDockerMachine
	I0929 12:24:52.847781 1139323 start.go:293] postStartSetup for "functional-782022" (driver="docker")
	I0929 12:24:52.847789 1139323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:24:52.847829 1139323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:24:52.847866 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:52.866643 1139323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
	I0929 12:24:52.966424 1139323 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:24:52.970265 1139323 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:24:52.970286 1139323 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:24:52.970292 1139323 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:24:52.970302 1139323 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:24:52.970312 1139323 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 12:24:52.970365 1139323 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 12:24:52.970430 1139323 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 12:24:52.970514 1139323 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/test/nested/copy/1101494/hosts -> hosts in /etc/test/nested/copy/1101494
	I0929 12:24:52.970547 1139323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1101494
	I0929 12:24:52.980145 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 12:24:53.006695 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/test/nested/copy/1101494/hosts --> /etc/test/nested/copy/1101494/hosts (40 bytes)
	I0929 12:24:53.033129 1139323 start.go:296] duration metric: took 185.330118ms for postStartSetup
	I0929 12:24:53.033225 1139323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:24:53.033266 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:53.052030 1139323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
	I0929 12:24:53.146825 1139323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:24:53.151845 1139323 fix.go:56] duration metric: took 1.07954005s for fixHost
	I0929 12:24:53.151863 1139323 start.go:83] releasing machines lock for "functional-782022", held for 1.079589264s
	I0929 12:24:53.151931 1139323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-782022
	I0929 12:24:53.171076 1139323 ssh_runner.go:195] Run: cat /version.json
	I0929 12:24:53.171148 1139323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:24:53.171157 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:53.171234 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:24:53.190358 1139323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
	I0929 12:24:53.191479 1139323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
	I0929 12:24:53.370456 1139323 ssh_runner.go:195] Run: systemctl --version
	I0929 12:24:53.375906 1139323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:24:53.380866 1139323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:24:53.401325 1139323 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:24:53.401382 1139323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:24:53.411592 1139323 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:24:53.411609 1139323 start.go:495] detecting cgroup driver to use...
	I0929 12:24:53.411645 1139323 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:24:53.411693 1139323 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 12:24:53.425289 1139323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:24:53.437732 1139323 docker.go:218] disabling cri-docker service (if available) ...
	I0929 12:24:53.437783 1139323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 12:24:53.454470 1139323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 12:24:53.468115 1139323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 12:24:53.588684 1139323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 12:24:53.711497 1139323 docker.go:234] disabling docker service ...
	I0929 12:24:53.711561 1139323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 12:24:53.726272 1139323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 12:24:53.739356 1139323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 12:24:53.857759 1139323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 12:24:53.984234 1139323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:24:53.997738 1139323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:24:54.016616 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:24:54.027769 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:24:54.038979 1139323 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:24:54.039040 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:24:54.050976 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:54.062623 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:24:54.074229 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:54.085698 1139323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:24:54.096550 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:24:54.108071 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:24:54.119700 1139323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:24:54.131442 1139323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:24:54.141326 1139323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:24:54.151050 1139323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:54.270405 1139323 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:24:54.508999 1139323 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 12:24:54.509101 1139323 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 12:24:54.513957 1139323 start.go:563] Will wait 60s for crictl version
	I0929 12:24:54.514028 1139323 ssh_runner.go:195] Run: which crictl
	I0929 12:24:54.517653 1139323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:24:54.552753 1139323 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 12:24:54.552814 1139323 ssh_runner.go:195] Run: containerd --version
	I0929 12:24:54.578118 1139323 ssh_runner.go:195] Run: containerd --version
	I0929 12:24:54.605124 1139323 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 12:24:54.606418 1139323 cli_runner.go:164] Run: docker network inspect functional-782022 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:24:54.624017 1139323 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 12:24:54.630249 1139323 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0929 12:24:54.631317 1139323 kubeadm.go:875] updating cluster {Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:24:54.631458 1139323 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 12:24:54.631519 1139323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:24:54.668664 1139323 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 12:24:54.668676 1139323 containerd.go:534] Images already preloaded, skipping extraction
	I0929 12:24:54.668727 1139323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:24:54.704786 1139323 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 12:24:54.704799 1139323 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:24:54.704806 1139323 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 containerd true true} ...
	I0929 12:24:54.704922 1139323 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-782022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:24:54.705011 1139323 ssh_runner.go:195] Run: sudo crictl info
	I0929 12:24:54.741459 1139323 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0929 12:24:54.741480 1139323 cni.go:84] Creating CNI manager for ""
	I0929 12:24:54.741489 1139323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:24:54.741499 1139323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:24:54.741517 1139323 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-782022 NodeName:functional-782022 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:24:54.741621 1139323 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-782022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:24:54.741671 1139323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:24:54.752282 1139323 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:24:54.752339 1139323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:24:54.761720 1139323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0929 12:24:54.780007 1139323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:24:54.798287 1139323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0929 12:24:54.816525 1139323 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:24:54.820313 1139323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:54.939923 1139323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:24:54.954708 1139323 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022 for IP: 192.168.49.2
	I0929 12:24:54.954721 1139323 certs.go:194] generating shared ca certs ...
	I0929 12:24:54.954735 1139323 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:54.954877 1139323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 12:24:54.954927 1139323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 12:24:54.954936 1139323 certs.go:256] generating profile certs ...
	I0929 12:24:54.955056 1139323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.key
	I0929 12:24:54.955096 1139323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/apiserver.key.37b3c445
	I0929 12:24:54.955131 1139323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/proxy-client.key
	I0929 12:24:54.955241 1139323 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 12:24:54.955266 1139323 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 12:24:54.955273 1139323 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:24:54.955297 1139323 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 12:24:54.955314 1139323 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:24:54.955330 1139323 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 12:24:54.955364 1139323 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 12:24:54.956024 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:24:54.982793 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 12:24:55.008373 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:24:55.033704 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:24:55.058879 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:24:55.084059 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:24:55.109293 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:24:55.134222 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:24:55.161040 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 12:24:55.185908 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:24:55.211380 1139323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 12:24:55.235896 1139323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:24:55.254104 1139323 ssh_runner.go:195] Run: openssl version
	I0929 12:24:55.260254 1139323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 12:24:55.270769 1139323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 12:24:55.275139 1139323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 12:24:55.275192 1139323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 12:24:55.282304 1139323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:24:55.292339 1139323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:24:55.303117 1139323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:55.306929 1139323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:55.306990 1139323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:55.314337 1139323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:24:55.324495 1139323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 12:24:55.335452 1139323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 12:24:55.339609 1139323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 12:24:55.339653 1139323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 12:24:55.347816 1139323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 12:24:55.358732 1139323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:24:55.362863 1139323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:24:55.370135 1139323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:24:55.377593 1139323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:24:55.385156 1139323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:24:55.393182 1139323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:24:55.400342 1139323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 12:24:55.407500 1139323 kubeadm.go:392] StartCluster: {Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:55.407583 1139323 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 12:24:55.407636 1139323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:24:55.445822 1139323 cri.go:89] found id: "9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62"
	I0929 12:24:55.445839 1139323 cri.go:89] found id: "43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14"
	I0929 12:24:55.445843 1139323 cri.go:89] found id: "b9d62595b50bfae23428bfa2de2b601791434616380e0a3f11b9c41a46ff551a"
	I0929 12:24:55.445847 1139323 cri.go:89] found id: "4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6"
	I0929 12:24:55.445849 1139323 cri.go:89] found id: "f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c"
	I0929 12:24:55.445853 1139323 cri.go:89] found id: "780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea"
	I0929 12:24:55.445857 1139323 cri.go:89] found id: "b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3"
	I0929 12:24:55.445859 1139323 cri.go:89] found id: "c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97"
	I0929 12:24:55.445862 1139323 cri.go:89] found id: "a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0"
	I0929 12:24:55.445871 1139323 cri.go:89] found id: ""
	I0929 12:24:55.445914 1139323 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0929 12:24:55.473039 1139323 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"05c745f61eda254e2081e28706ea067153c3523d4073b9f76d02054b1953d55b","pid":2340,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/05c745f61eda254e2081e28706ea067153c3523d4073b9f76d02054b1953d55b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/05c745f61eda254e2081e28706ea067153c3523d4073b9f76d02054b1953d55b/rootfs","created":"2025-09-29T12:24:33.424860113Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"05c745f61eda254e2081e28706ea067153c3523d4073b9f76d02054b1953d55b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-zm6rn_f82f62e2-ee5b-4ed5-8f42-8d4f1c782561","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-zm6rn","
io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f82f62e2-ee5b-4ed5-8f42-8d4f1c782561"},"owner":"root"},{"ociVersion":"1.2.0","id":"1ab56f1d7c59d7dc3a2f613bbaba1a3356f1b9112d5a6f9fd939bb44db22a7d5","pid":1299,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ab56f1d7c59d7dc3a2f613bbaba1a3356f1b9112d5a6f9fd939bb44db22a7d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ab56f1d7c59d7dc3a2f613bbaba1a3356f1b9112d5a6f9fd939bb44db22a7d5/rootfs","created":"2025-09-29T12:24:08.402147669Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1ab56f1d7c59d7dc3a2f613bbaba1a3356f1b9112d5a6f9fd939bb44db22a7d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-782022_19c302b7004510fb7449403beccf6d69","io.kubernetes.cri.sandb
ox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-782022","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"19c302b7004510fb7449403beccf6d69"},"owner":"root"},{"ociVersion":"1.2.0","id":"43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14","pid":2372,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14/rootfs","created":"2025-09-29T12:24:33.51983334Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"05c745f61eda254e2081e28706ea067153c3523d4073b9f76d02054b1953d55b","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-zm6rn","io.kubernetes.cri.sandbox-namespace
":"kube-system","io.kubernetes.cri.sandbox-uid":"f82f62e2-ee5b-4ed5-8f42-8d4f1c782561"},"owner":"root"},{"ociVersion":"1.2.0","id":"4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6","pid":2006,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6/rootfs","created":"2025-09-29T12:24:18.841379842Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"bbcc18db59f8f558906cd86061475ee0112cba208c7f58e080cee0b56dabdda0","io.kubernetes.cri.sandbox-name":"kindnet-gk4hp","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bae69e1-0489-4b0e-b23e-a75f66de7799"},"owner":"root"},{"ociVersio
n":"1.2.0","id":"560b48c7f2dd01fa2b340c7a6faa9100ef14367f315adee3eaf0b479d7dbc302","pid":1911,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/560b48c7f2dd01fa2b340c7a6faa9100ef14367f315adee3eaf0b479d7dbc302","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/560b48c7f2dd01fa2b340c7a6faa9100ef14367f315adee3eaf0b479d7dbc302/rootfs","created":"2025-09-29T12:24:18.414360044Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"560b48c7f2dd01fa2b340c7a6faa9100ef14367f315adee3eaf0b479d7dbc302","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-dnlcd_8cda39d0-c10d-4fea-b082-2bb19f79a2ce","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-dnlcd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8cda39d0-c10
d-4fea-b082-2bb19f79a2ce"},"owner":"root"},{"ociVersion":"1.2.0","id":"684c3b624bc2278e097312e4a074febf10cf15fc8f05907259df8ae28b256706","pid":1316,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/684c3b624bc2278e097312e4a074febf10cf15fc8f05907259df8ae28b256706","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/684c3b624bc2278e097312e4a074febf10cf15fc8f05907259df8ae28b256706/rootfs","created":"2025-09-29T12:24:08.415919474Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"684c3b624bc2278e097312e4a074febf10cf15fc8f05907259df8ae28b256706","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-782022_eb7c4caaf3ab57536130d0c13704caee","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-782022","io.kubernetes.cri.sandbox-namesp
ace":"kube-system","io.kubernetes.cri.sandbox-uid":"eb7c4caaf3ab57536130d0c13704caee"},"owner":"root"},{"ociVersion":"1.2.0","id":"780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea","pid":1464,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea/rootfs","created":"2025-09-29T12:24:08.547447578Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"b0c619e3ce66f6a404a4526b5e4cd03cc2205c7eca43bba3420e2125ac784c13","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-782022","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f540b9c26efae2f1fac6dd5a6b3b0b81"},"owner":"root"},{
"ociVersion":"1.2.0","id":"81f3483cb1bf7ff94fa88a247064c5729fed7072b9466dda2b2bec4ae5a81a53","pid":2111,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81f3483cb1bf7ff94fa88a247064c5729fed7072b9466dda2b2bec4ae5a81a53","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/81f3483cb1bf7ff94fa88a247064c5729fed7072b9466dda2b2bec4ae5a81a53/rootfs","created":"2025-09-29T12:24:18.918758457Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"81f3483cb1bf7ff94fa88a247064c5729fed7072b9466dda2b2bec4ae5a81a53","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_eb829926-f269-4be7-ade6-098746403540","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-ui
d":"eb829926-f269-4be7-ade6-098746403540"},"owner":"root"},{"ociVersion":"1.2.0","id":"8d20f6e965d70396ad98edac7a48d22adbad79036f082f0964c44a2facc7de39","pid":1303,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d20f6e965d70396ad98edac7a48d22adbad79036f082f0964c44a2facc7de39","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d20f6e965d70396ad98edac7a48d22adbad79036f082f0964c44a2facc7de39/rootfs","created":"2025-09-29T12:24:08.40902121Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"8d20f6e965d70396ad98edac7a48d22adbad79036f082f0964c44a2facc7de39","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-782022_6ebfc97bb9fc45590e1c8b0d6aa7638c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager
-functional-782022","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6ebfc97bb9fc45590e1c8b0d6aa7638c"},"owner":"root"},{"ociVersion":"1.2.0","id":"9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62","pid":3441,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62/rootfs","created":"2025-09-29T12:24:49.511211194Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"81f3483cb1bf7ff94fa88a247064c5729fed7072b9466dda2b2bec4ae5a81a53","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"eb8
29926-f269-4be7-ade6-098746403540"},"owner":"root"},{"ociVersion":"1.2.0","id":"a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0","pid":1431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0/rootfs","created":"2025-09-29T12:24:08.523120569Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"1ab56f1d7c59d7dc3a2f613bbaba1a3356f1b9112d5a6f9fd939bb44db22a7d5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-782022","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"19c302b7004510fb7449403beccf6d69"},"owner":"root"},{"ociVersion":"1.2.0","id":"b0c619e3ce66f6a404a4526b
5e4cd03cc2205c7eca43bba3420e2125ac784c13","pid":1324,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0c619e3ce66f6a404a4526b5e4cd03cc2205c7eca43bba3420e2125ac784c13","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0c619e3ce66f6a404a4526b5e4cd03cc2205c7eca43bba3420e2125ac784c13/rootfs","created":"2025-09-29T12:24:08.419109228Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b0c619e3ce66f6a404a4526b5e4cd03cc2205c7eca43bba3420e2125ac784c13","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-782022_f540b9c26efae2f1fac6dd5a6b3b0b81","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-782022","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f540b9c26efae2f1fac6dd5
a6b3b0b81"},"owner":"root"},{"ociVersion":"1.2.0","id":"b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3","pid":1450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3/rootfs","created":"2025-09-29T12:24:08.532608098Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"684c3b624bc2278e097312e4a074febf10cf15fc8f05907259df8ae28b256706","io.kubernetes.cri.sandbox-name":"etcd-functional-782022","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"eb7c4caaf3ab57536130d0c13704caee"},"owner":"root"},{"ociVersion":"1.2.0","id":"bbcc18db59f8f558906cd86061475ee0112cba208c7f58e080cee0b56dabdda0","pid":1904,"
status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbcc18db59f8f558906cd86061475ee0112cba208c7f58e080cee0b56dabdda0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbcc18db59f8f558906cd86061475ee0112cba208c7f58e080cee0b56dabdda0/rootfs","created":"2025-09-29T12:24:18.438503201Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bbcc18db59f8f558906cd86061475ee0112cba208c7f58e080cee0b56dabdda0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-gk4hp_0bae69e1-0489-4b0e-b23e-a75f66de7799","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-gk4hp","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0bae69e1-0489-4b0e-b23e-a75f66de7799"},"owner":"root"},{"ociVersion":"1.2.0","id":"c72893cda07183eef
7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97","pid":1452,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97/rootfs","created":"2025-09-29T12:24:08.539997835Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"8d20f6e965d70396ad98edac7a48d22adbad79036f082f0964c44a2facc7de39","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-782022","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6ebfc97bb9fc45590e1c8b0d6aa7638c"},"owner":"root"},{"ociVersion":"1.2.0","id":"f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c","pid":1944,"status":"running
","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c/rootfs","created":"2025-09-29T12:24:18.525004714Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.0","io.kubernetes.cri.sandbox-id":"560b48c7f2dd01fa2b340c7a6faa9100ef14367f315adee3eaf0b479d7dbc302","io.kubernetes.cri.sandbox-name":"kube-proxy-dnlcd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8cda39d0-c10d-4fea-b082-2bb19f79a2ce"},"owner":"root"}]
	I0929 12:24:55.473301 1139323 cri.go:126] list returned 16 containers
	I0929 12:24:55.473309 1139323 cri.go:129] container: {ID:05c745f61eda254e2081e28706ea067153c3523d4073b9f76d02054b1953d55b Status:running}
	I0929 12:24:55.473323 1139323 cri.go:131] skipping 05c745f61eda254e2081e28706ea067153c3523d4073b9f76d02054b1953d55b - not in ps
	I0929 12:24:55.473325 1139323 cri.go:129] container: {ID:1ab56f1d7c59d7dc3a2f613bbaba1a3356f1b9112d5a6f9fd939bb44db22a7d5 Status:running}
	I0929 12:24:55.473328 1139323 cri.go:131] skipping 1ab56f1d7c59d7dc3a2f613bbaba1a3356f1b9112d5a6f9fd939bb44db22a7d5 - not in ps
	I0929 12:24:55.473331 1139323 cri.go:129] container: {ID:43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14 Status:running}
	I0929 12:24:55.473336 1139323 cri.go:135] skipping {43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14 running}: state = "running", want "paused"
	I0929 12:24:55.473345 1139323 cri.go:129] container: {ID:4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6 Status:running}
	I0929 12:24:55.473348 1139323 cri.go:135] skipping {4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6 running}: state = "running", want "paused"
	I0929 12:24:55.473352 1139323 cri.go:129] container: {ID:560b48c7f2dd01fa2b340c7a6faa9100ef14367f315adee3eaf0b479d7dbc302 Status:running}
	I0929 12:24:55.473356 1139323 cri.go:131] skipping 560b48c7f2dd01fa2b340c7a6faa9100ef14367f315adee3eaf0b479d7dbc302 - not in ps
	I0929 12:24:55.473359 1139323 cri.go:129] container: {ID:684c3b624bc2278e097312e4a074febf10cf15fc8f05907259df8ae28b256706 Status:running}
	I0929 12:24:55.473361 1139323 cri.go:131] skipping 684c3b624bc2278e097312e4a074febf10cf15fc8f05907259df8ae28b256706 - not in ps
	I0929 12:24:55.473364 1139323 cri.go:129] container: {ID:780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea Status:running}
	I0929 12:24:55.473368 1139323 cri.go:135] skipping {780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea running}: state = "running", want "paused"
	I0929 12:24:55.473371 1139323 cri.go:129] container: {ID:81f3483cb1bf7ff94fa88a247064c5729fed7072b9466dda2b2bec4ae5a81a53 Status:running}
	I0929 12:24:55.473375 1139323 cri.go:131] skipping 81f3483cb1bf7ff94fa88a247064c5729fed7072b9466dda2b2bec4ae5a81a53 - not in ps
	I0929 12:24:55.473377 1139323 cri.go:129] container: {ID:8d20f6e965d70396ad98edac7a48d22adbad79036f082f0964c44a2facc7de39 Status:running}
	I0929 12:24:55.473380 1139323 cri.go:131] skipping 8d20f6e965d70396ad98edac7a48d22adbad79036f082f0964c44a2facc7de39 - not in ps
	I0929 12:24:55.473382 1139323 cri.go:129] container: {ID:9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62 Status:running}
	I0929 12:24:55.473386 1139323 cri.go:135] skipping {9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62 running}: state = "running", want "paused"
	I0929 12:24:55.473389 1139323 cri.go:129] container: {ID:a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0 Status:running}
	I0929 12:24:55.473393 1139323 cri.go:135] skipping {a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0 running}: state = "running", want "paused"
	I0929 12:24:55.473395 1139323 cri.go:129] container: {ID:b0c619e3ce66f6a404a4526b5e4cd03cc2205c7eca43bba3420e2125ac784c13 Status:running}
	I0929 12:24:55.473399 1139323 cri.go:131] skipping b0c619e3ce66f6a404a4526b5e4cd03cc2205c7eca43bba3420e2125ac784c13 - not in ps
	I0929 12:24:55.473402 1139323 cri.go:129] container: {ID:b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3 Status:running}
	I0929 12:24:55.473405 1139323 cri.go:135] skipping {b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3 running}: state = "running", want "paused"
	I0929 12:24:55.473407 1139323 cri.go:129] container: {ID:bbcc18db59f8f558906cd86061475ee0112cba208c7f58e080cee0b56dabdda0 Status:running}
	I0929 12:24:55.473411 1139323 cri.go:131] skipping bbcc18db59f8f558906cd86061475ee0112cba208c7f58e080cee0b56dabdda0 - not in ps
	I0929 12:24:55.473413 1139323 cri.go:129] container: {ID:c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97 Status:running}
	I0929 12:24:55.473417 1139323 cri.go:135] skipping {c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97 running}: state = "running", want "paused"
	I0929 12:24:55.473419 1139323 cri.go:129] container: {ID:f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c Status:running}
	I0929 12:24:55.473422 1139323 cri.go:135] skipping {f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c running}: state = "running", want "paused"
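The cri.go:129/131/135 lines above show the container filter at work: every entry from the runc JSON listing is dropped either because its ID is not in the crictl "ps" set or because its state ("running") does not match the wanted state ("paused"). A minimal Go sketch of that filter follows; the container type, map, and helper name are illustrative only, not minikube's actual code.

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState mirrors the skip logic in the log lines above: drop entries
// not present in the crictl listing, then drop entries whose state differs
// from the wanted state.
func filterByState(all []container, inPS map[string]bool, want string) []string {
	var keep []string
	for _, c := range all {
		if !inPS[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping %v: state = %q, want %q\n", c, c.Status, want)
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	all := []container{{ID: "abc123", Status: "running"}, {ID: "def456", Status: "paused"}}
	inPS := map[string]bool{"abc123": true, "def456": true}
	fmt.Println(filterByState(all, inPS, "paused")) // only def456 survives
}

With want set to "paused" and every listed container running, nothing survives the filter, which is exactly why every ID above is skipped.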
	I0929 12:24:55.473474 1139323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:24:55.484016 1139323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:24:55.484026 1139323 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:24:55.484069 1139323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:24:55.493707 1139323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:24:55.494333 1139323 kubeconfig.go:125] found "functional-782022" server: "https://192.168.49.2:8441"
	I0929 12:24:55.496208 1139323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:24:55.505822 1139323 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-29 12:24:04.364660788 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-29 12:24:54.813891323 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
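The drift check above runs `diff -u` between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new and treats a non-empty diff as "config drift" that forces a reconfigure. A hedged Go sketch of that check, assuming the usual diff exit-code convention (0 = identical, 1 = differ, >1 = error); paths and the helper name are illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted runs `diff -u current proposed` and reports whether
// the two files differ, returning the unified diff when they do.
func kubeadmConfigDrifted(current, proposed string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files are identical
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ, out holds the diff
	}
	return false, "", err // exit >1: diff itself failed
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}

Here the drift is the admission-plugins change ("NamespaceAutoProvision" replacing the default plugin list), so the restart path reconfigures the control plane from the new file.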
	I0929 12:24:55.505834 1139323 kubeadm.go:1152] stopping kube-system containers ...
	I0929 12:24:55.505855 1139323 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0929 12:24:55.505901 1139323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:24:55.543185 1139323 cri.go:89] found id: "9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62"
	I0929 12:24:55.543198 1139323 cri.go:89] found id: "43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14"
	I0929 12:24:55.543201 1139323 cri.go:89] found id: "b9d62595b50bfae23428bfa2de2b601791434616380e0a3f11b9c41a46ff551a"
	I0929 12:24:55.543204 1139323 cri.go:89] found id: "4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6"
	I0929 12:24:55.543206 1139323 cri.go:89] found id: "f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c"
	I0929 12:24:55.543208 1139323 cri.go:89] found id: "780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea"
	I0929 12:24:55.543209 1139323 cri.go:89] found id: "b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3"
	I0929 12:24:55.543211 1139323 cri.go:89] found id: "c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97"
	I0929 12:24:55.543212 1139323 cri.go:89] found id: "a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0"
	I0929 12:24:55.543220 1139323 cri.go:89] found id: ""
	I0929 12:24:55.543224 1139323 cri.go:252] Stopping containers: [9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62 43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14 b9d62595b50bfae23428bfa2de2b601791434616380e0a3f11b9c41a46ff551a 4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6 f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c 780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3 c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97 a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0]
	I0929 12:24:55.543277 1139323 ssh_runner.go:195] Run: which crictl
	I0929 12:24:55.547244 1139323 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62 43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14 b9d62595b50bfae23428bfa2de2b601791434616380e0a3f11b9c41a46ff551a 4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6 f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c 780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3 c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97 a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0
	I0929 12:25:11.033993 1139323 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 9e2977e9f141f59ef27cc2b7462ad35cfa6d3ad016c3b99648979bf363efeb62 43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14 b9d62595b50bfae23428bfa2de2b601791434616380e0a3f11b9c41a46ff551a 4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6 f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c 780407c1ca2094646cd1ec5e25bfc1d463c56bc314ef2bd1d898b58d8200c5ea b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3 c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97 a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0: (15.486697874s)
	I0929 12:25:11.034065 1139323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0929 12:25:11.079884 1139323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:25:11.090538 1139323 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep 29 12:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Sep 29 12:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep 29 12:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Sep 29 12:24 /etc/kubernetes/scheduler.conf
	
	I0929 12:25:11.090595 1139323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0929 12:25:11.100485 1139323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0929 12:25:11.111056 1139323 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:25:11.111119 1139323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:25:11.120906 1139323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0929 12:25:11.131199 1139323 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:25:11.131259 1139323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:25:11.141062 1139323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0929 12:25:11.150854 1139323 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:25:11.150909 1139323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 12:25:11.160728 1139323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:25:11.171318 1139323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:25:11.216899 1139323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:25:12.311900 1139323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.094970102s)
	I0929 12:25:12.311920 1139323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:25:12.488084 1139323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:25:12.541679 1139323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:25:12.601846 1139323 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:25:12.601920 1139323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:25:13.102099 1139323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:25:13.602827 1139323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:25:13.617271 1139323 api_server.go:72] duration metric: took 1.015429283s to wait for apiserver process to appear ...
	I0929 12:25:13.617297 1139323 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:25:13.617319 1139323 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 12:25:14.590167 1139323 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 12:25:14.590189 1139323 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 12:25:14.590207 1139323 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 12:25:14.621925 1139323 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 12:25:14.621946 1139323 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 12:25:14.621975 1139323 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 12:25:14.627983 1139323 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:14.628006 1139323 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:15.117466 1139323 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 12:25:15.121542 1139323 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:15.121558 1139323 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:15.618206 1139323 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 12:25:15.623580 1139323 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:15.623602 1139323 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:16.118321 1139323 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 12:25:16.122897 1139323 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 12:25:16.130113 1139323 api_server.go:141] control plane version: v1.34.0
	I0929 12:25:16.130136 1139323 api_server.go:131] duration metric: took 2.512833259s to wait for apiserver health ...
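The healthz wait above polls https://192.168.49.2:8441/healthz roughly every 500ms, tolerating 403 (anonymous user) and 500 (poststarthooks still failing) until the endpoint returns 200 "ok". A minimal Go sketch of such a poller, assuming the apiserver's self-signed certificate is not trusted by the client (hence InsecureSkipVerify) and that anything other than HTTP 200 means "not healthy yet"; the helper is illustrative, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, printing non-200 bodies the way the log does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}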
	I0929 12:25:16.130145 1139323 cni.go:84] Creating CNI manager for ""
	I0929 12:25:16.130150 1139323 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:25:16.132173 1139323 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 12:25:16.133392 1139323 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 12:25:16.137925 1139323 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 12:25:16.137937 1139323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 12:25:16.158058 1139323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 12:25:16.459350 1139323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:25:16.463402 1139323 system_pods.go:59] 8 kube-system pods found
	I0929 12:25:16.463430 1139323 system_pods.go:61] "coredns-66bc5c9577-zm6rn" [f82f62e2-ee5b-4ed5-8f42-8d4f1c782561] Running
	I0929 12:25:16.463438 1139323 system_pods.go:61] "etcd-functional-782022" [e4700d7f-9d76-402e-b494-2fd61fdc38ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:25:16.463441 1139323 system_pods.go:61] "kindnet-gk4hp" [0bae69e1-0489-4b0e-b23e-a75f66de7799] Running
	I0929 12:25:16.463446 1139323 system_pods.go:61] "kube-apiserver-functional-782022" [2079a7cd-a81d-482d-8b06-c9c61d7b44bb] Pending
	I0929 12:25:16.463458 1139323 system_pods.go:61] "kube-controller-manager-functional-782022" [3ff4895b-0c6f-4aed-8be8-fa8c6e2477b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:25:16.463461 1139323 system_pods.go:61] "kube-proxy-dnlcd" [8cda39d0-c10d-4fea-b082-2bb19f79a2ce] Running
	I0929 12:25:16.463467 1139323 system_pods.go:61] "kube-scheduler-functional-782022" [1e1b692f-15ff-454f-a10a-c1ecb437aae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:25:16.463470 1139323 system_pods.go:61] "storage-provisioner" [eb829926-f269-4be7-ade6-098746403540] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:25:16.463476 1139323 system_pods.go:74] duration metric: took 4.114153ms to wait for pod list to return data ...
	I0929 12:25:16.463488 1139323 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:25:16.466540 1139323 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:25:16.466558 1139323 node_conditions.go:123] node cpu capacity is 8
	I0929 12:25:16.466569 1139323 node_conditions.go:105] duration metric: took 3.078009ms to run NodePressure ...
	I0929 12:25:16.466587 1139323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:25:16.716061 1139323 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 12:25:16.718714 1139323 kubeadm.go:735] kubelet initialised
	I0929 12:25:16.718724 1139323 kubeadm.go:736] duration metric: took 2.649712ms waiting for restarted kubelet to initialise ...
	I0929 12:25:16.718740 1139323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:25:16.727672 1139323 ops.go:34] apiserver oom_adj: -16
	I0929 12:25:16.727687 1139323 kubeadm.go:593] duration metric: took 21.243654985s to restartPrimaryControlPlane
	I0929 12:25:16.727698 1139323 kubeadm.go:394] duration metric: took 21.320223747s to StartCluster
	I0929 12:25:16.727721 1139323 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:25:16.727786 1139323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:25:16.728370 1139323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:25:16.728631 1139323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 12:25:16.728702 1139323 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:25:16.728783 1139323 addons.go:69] Setting storage-provisioner=true in profile "functional-782022"
	I0929 12:25:16.728800 1139323 addons.go:238] Setting addon storage-provisioner=true in "functional-782022"
	W0929 12:25:16.728807 1139323 addons.go:247] addon storage-provisioner should already be in state true
	I0929 12:25:16.728820 1139323 addons.go:69] Setting default-storageclass=true in profile "functional-782022"
	I0929 12:25:16.728838 1139323 host.go:66] Checking if "functional-782022" exists ...
	I0929 12:25:16.728846 1139323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-782022"
	I0929 12:25:16.728889 1139323 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:25:16.729141 1139323 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
	I0929 12:25:16.729223 1139323 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
	I0929 12:25:16.730508 1139323 out.go:179] * Verifying Kubernetes components...
	I0929 12:25:16.731934 1139323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:25:16.749936 1139323 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:25:16.750460 1139323 addons.go:238] Setting addon default-storageclass=true in "functional-782022"
	W0929 12:25:16.750473 1139323 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:25:16.750509 1139323 host.go:66] Checking if "functional-782022" exists ...
	I0929 12:25:16.751003 1139323 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
	I0929 12:25:16.751297 1139323 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:25:16.751310 1139323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:25:16.751369 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:25:16.781606 1139323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
	I0929 12:25:16.783144 1139323 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:25:16.783160 1139323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:25:16.783217 1139323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
	I0929 12:25:16.802017 1139323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
	I0929 12:25:16.868318 1139323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:25:16.882650 1139323 node_ready.go:35] waiting up to 6m0s for node "functional-782022" to be "Ready" ...
	I0929 12:25:16.885335 1139323 node_ready.go:49] node "functional-782022" is "Ready"
	I0929 12:25:16.885351 1139323 node_ready.go:38] duration metric: took 2.660405ms for node "functional-782022" to be "Ready" ...
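The node_ready.go wait above amounts to reading the node object and checking its Ready condition. A hedged client-go sketch of that check; the kubeconfig path and node name are taken from the run for illustration, and the helper is not minikube's actual code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches the node and reports whether its NodeReady condition is True.
func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumed kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21652-1097891/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeIsReady(client, "functional-782022")
	fmt.Println("node Ready:", ready, "err:", err)
}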
	I0929 12:25:16.885364 1139323 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:25:16.885415 1139323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:25:16.895792 1139323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:25:16.899629 1139323 api_server.go:72] duration metric: took 170.965313ms to wait for apiserver process to appear ...
	I0929 12:25:16.899648 1139323 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:25:16.899675 1139323 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 12:25:16.905698 1139323 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 12:25:16.906858 1139323 api_server.go:141] control plane version: v1.34.0
	I0929 12:25:16.906876 1139323 api_server.go:131] duration metric: took 7.220779ms to wait for apiserver health ...
	I0929 12:25:16.906887 1139323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:25:16.910218 1139323 system_pods.go:59] 8 kube-system pods found
	I0929 12:25:16.910239 1139323 system_pods.go:61] "coredns-66bc5c9577-zm6rn" [f82f62e2-ee5b-4ed5-8f42-8d4f1c782561] Running
	I0929 12:25:16.910245 1139323 system_pods.go:61] "etcd-functional-782022" [e4700d7f-9d76-402e-b494-2fd61fdc38ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:25:16.910249 1139323 system_pods.go:61] "kindnet-gk4hp" [0bae69e1-0489-4b0e-b23e-a75f66de7799] Running
	I0929 12:25:16.910255 1139323 system_pods.go:61] "kube-apiserver-functional-782022" [2079a7cd-a81d-482d-8b06-c9c61d7b44bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:25:16.910261 1139323 system_pods.go:61] "kube-controller-manager-functional-782022" [3ff4895b-0c6f-4aed-8be8-fa8c6e2477b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:25:16.910264 1139323 system_pods.go:61] "kube-proxy-dnlcd" [8cda39d0-c10d-4fea-b082-2bb19f79a2ce] Running
	I0929 12:25:16.910269 1139323 system_pods.go:61] "kube-scheduler-functional-782022" [1e1b692f-15ff-454f-a10a-c1ecb437aae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:25:16.910271 1139323 system_pods.go:61] "storage-provisioner" [eb829926-f269-4be7-ade6-098746403540] Running
	I0929 12:25:16.910277 1139323 system_pods.go:74] duration metric: took 3.38476ms to wait for pod list to return data ...
	I0929 12:25:16.910284 1139323 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:25:16.913069 1139323 default_sa.go:45] found service account: "default"
	I0929 12:25:16.913084 1139323 default_sa.go:55] duration metric: took 2.793596ms for default service account to be created ...
	I0929 12:25:16.913095 1139323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:25:16.916066 1139323 system_pods.go:86] 8 kube-system pods found
	I0929 12:25:16.916082 1139323 system_pods.go:89] "coredns-66bc5c9577-zm6rn" [f82f62e2-ee5b-4ed5-8f42-8d4f1c782561] Running
	I0929 12:25:16.916089 1139323 system_pods.go:89] "etcd-functional-782022" [e4700d7f-9d76-402e-b494-2fd61fdc38ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:25:16.916100 1139323 system_pods.go:89] "kindnet-gk4hp" [0bae69e1-0489-4b0e-b23e-a75f66de7799] Running
	I0929 12:25:16.916107 1139323 system_pods.go:89] "kube-apiserver-functional-782022" [2079a7cd-a81d-482d-8b06-c9c61d7b44bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:25:16.916114 1139323 system_pods.go:89] "kube-controller-manager-functional-782022" [3ff4895b-0c6f-4aed-8be8-fa8c6e2477b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:25:16.916117 1139323 system_pods.go:89] "kube-proxy-dnlcd" [8cda39d0-c10d-4fea-b082-2bb19f79a2ce] Running
	I0929 12:25:16.916124 1139323 system_pods.go:89] "kube-scheduler-functional-782022" [1e1b692f-15ff-454f-a10a-c1ecb437aae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:25:16.916126 1139323 system_pods.go:89] "storage-provisioner" [eb829926-f269-4be7-ade6-098746403540] Running
	I0929 12:25:16.916133 1139323 system_pods.go:126] duration metric: took 3.033016ms to wait for k8s-apps to be running ...
	I0929 12:25:16.916139 1139323 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:25:16.916204 1139323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:25:16.916442 1139323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:25:17.390547 1139323 system_svc.go:56] duration metric: took 474.396531ms WaitForService to wait for kubelet
	I0929 12:25:17.390566 1139323 kubeadm.go:578] duration metric: took 661.910228ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:25:17.390583 1139323 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:25:17.393114 1139323 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:25:17.393129 1139323 node_conditions.go:123] node cpu capacity is 8
	I0929 12:25:17.393142 1139323 node_conditions.go:105] duration metric: took 2.554627ms to run NodePressure ...
	I0929 12:25:17.393154 1139323 start.go:241] waiting for startup goroutines ...
	I0929 12:25:17.398340 1139323 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 12:25:17.399393 1139323 addons.go:514] duration metric: took 670.710315ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 12:25:17.399424 1139323 start.go:246] waiting for cluster config update ...
	I0929 12:25:17.399435 1139323 start.go:255] writing updated cluster config ...
	I0929 12:25:17.399671 1139323 ssh_runner.go:195] Run: rm -f paused
	I0929 12:25:17.404578 1139323 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:25:17.407809 1139323 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zm6rn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:17.412045 1139323 pod_ready.go:94] pod "coredns-66bc5c9577-zm6rn" is "Ready"
	I0929 12:25:17.412058 1139323 pod_ready.go:86] duration metric: took 4.235171ms for pod "coredns-66bc5c9577-zm6rn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:17.414072 1139323 pod_ready.go:83] waiting for pod "etcd-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:25:19.419716 1139323 pod_ready.go:104] pod "etcd-functional-782022" is not "Ready", error: <nil>
	W0929 12:25:21.919939 1139323 pod_ready.go:104] pod "etcd-functional-782022" is not "Ready", error: <nil>
	I0929 12:25:23.919991 1139323 pod_ready.go:94] pod "etcd-functional-782022" is "Ready"
	I0929 12:25:23.920026 1139323 pod_ready.go:86] duration metric: took 6.505926982s for pod "etcd-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:23.922410 1139323 pod_ready.go:83] waiting for pod "kube-apiserver-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:25:25.928381 1139323 pod_ready.go:104] pod "kube-apiserver-functional-782022" is not "Ready", error: <nil>
	W0929 12:25:28.427924 1139323 pod_ready.go:104] pod "kube-apiserver-functional-782022" is not "Ready", error: <nil>
	I0929 12:25:29.428288 1139323 pod_ready.go:94] pod "kube-apiserver-functional-782022" is "Ready"
	I0929 12:25:29.428317 1139323 pod_ready.go:86] duration metric: took 5.505882701s for pod "kube-apiserver-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:29.430838 1139323 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:25:31.436892 1139323 pod_ready.go:104] pod "kube-controller-manager-functional-782022" is not "Ready", error: <nil>
	I0929 12:25:32.436445 1139323 pod_ready.go:94] pod "kube-controller-manager-functional-782022" is "Ready"
	I0929 12:25:32.436463 1139323 pod_ready.go:86] duration metric: took 3.005610212s for pod "kube-controller-manager-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:32.438506 1139323 pod_ready.go:83] waiting for pod "kube-proxy-dnlcd" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:32.442490 1139323 pod_ready.go:94] pod "kube-proxy-dnlcd" is "Ready"
	I0929 12:25:32.442502 1139323 pod_ready.go:86] duration metric: took 3.986569ms for pod "kube-proxy-dnlcd" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:32.444677 1139323 pod_ready.go:83] waiting for pod "kube-scheduler-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:32.448524 1139323 pod_ready.go:94] pod "kube-scheduler-functional-782022" is "Ready"
	I0929 12:25:32.448535 1139323 pod_ready.go:86] duration metric: took 3.847563ms for pod "kube-scheduler-functional-782022" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:25:32.448543 1139323 pod_ready.go:40] duration metric: took 15.043942108s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:25:32.494506 1139323 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:25:32.496447 1139323 out.go:179] * Done! kubectl is now configured to use "functional-782022" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	897d454b93764       6e38f40d628db       6 minutes ago       Running             storage-provisioner       3                   81f3483cb1bf7       storage-provisioner
	bcc5e572f0ecc       90550c43ad2bc       6 minutes ago       Running             kube-apiserver            0                   1eff7df7d085a       kube-apiserver-functional-782022
	f45ad0c405faa       46169d968e920       6 minutes ago       Running             kube-scheduler            1                   1ab56f1d7c59d       kube-scheduler-functional-782022
	f2c5ba8dccddf       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   684c3b624bc22       etcd-functional-782022
	11011c88597fc       a0af72f2ec6d6       6 minutes ago       Running             kube-controller-manager   1                   8d20f6e965d70       kube-controller-manager-functional-782022
	1a1e77b489c91       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       2                   81f3483cb1bf7       storage-provisioner
	0bbcc10ce4fbd       52546a367cc9e       6 minutes ago       Running             coredns                   1                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	7184ba4a9a391       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   bbcc18db59f8f       kindnet-gk4hp
	7def58db713d2       df0860106674d       6 minutes ago       Running             kube-proxy                1                   560b48c7f2dd0       kube-proxy-dnlcd
	43db49e8ed43f       52546a367cc9e       7 minutes ago       Exited              coredns                   0                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	4a8d7cb63e108       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   bbcc18db59f8f       kindnet-gk4hp
	f8040ae956a29       df0860106674d       7 minutes ago       Exited              kube-proxy                0                   560b48c7f2dd0       kube-proxy-dnlcd
	b0de26d17b60c       5f1f5298c888d       7 minutes ago       Exited              etcd                      0                   684c3b624bc22       etcd-functional-782022
	c72893cda0718       a0af72f2ec6d6       7 minutes ago       Exited              kube-controller-manager   0                   8d20f6e965d70       kube-controller-manager-functional-782022
	a87bec1b3ee14       46169d968e920       7 minutes ago       Exited              kube-scheduler            0                   1ab56f1d7c59d       kube-scheduler-functional-782022
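
The table above shows both generations of every control-plane container: the current set (Running) and the pre-restart set (Exited), which is expected because the functional suite stops and restarts the cluster mid-run. As a hedged sketch (not taken from this log), a similar listing can usually be pulled straight from the node's containerd runtime:

	# sketch only: profile name matches this report; crictl is the standard CRI CLI available inside the node
	minikube -p functional-782022 ssh -- sudo crictl ps -a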
	
	
	==> containerd <==
	Sep 29 12:31:19 functional-782022 containerd[3890]: time="2025-09-29T12:31:19.620280271Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-782022\""
	Sep 29 12:31:19 functional-782022 containerd[3890]: time="2025-09-29T12:31:19.624270052Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 29 12:31:19 functional-782022 containerd[3890]: time="2025-09-29T12:31:19.625189660Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-782022\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 29 12:31:22 functional-782022 containerd[3890]: time="2025-09-29T12:31:22.645168958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:mysql-5bb876957f-2z7m2,Uid:e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4,Namespace:default,Attempt:0,}"
	Sep 29 12:31:22 functional-782022 containerd[3890]: time="2025-09-29T12:31:22.751503636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:mysql-5bb876957f-2z7m2,Uid:e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4,Namespace:default,Attempt:0,} returns sandbox id \"aea146d481836c1053645334f8df8494e67fa1bef13ad9860ebbd3a65f6fb8b4\""
	Sep 29 12:31:22 functional-782022 containerd[3890]: time="2025-09-29T12:31:22.753338141Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Sep 29 12:31:22 functional-782022 containerd[3890]: time="2025-09-29T12:31:22.754753563Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:23 functional-782022 containerd[3890]: time="2025-09-29T12:31:23.416300618Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:25 functional-782022 containerd[3890]: time="2025-09-29T12:31:25.277563488Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:31:25 functional-782022 containerd[3890]: time="2025-09-29T12:31:25.277621364Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Sep 29 12:31:35 functional-782022 containerd[3890]: time="2025-09-29T12:31:35.581818317Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Sep 29 12:31:35 functional-782022 containerd[3890]: time="2025-09-29T12:31:35.583575944Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:36 functional-782022 containerd[3890]: time="2025-09-29T12:31:36.236881441Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:38 functional-782022 containerd[3890]: time="2025-09-29T12:31:38.105788755Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:31:38 functional-782022 containerd[3890]: time="2025-09-29T12:31:38.105833684Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10997"
	Sep 29 12:31:38 functional-782022 containerd[3890]: time="2025-09-29T12:31:38.581501193Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Sep 29 12:31:38 functional-782022 containerd[3890]: time="2025-09-29T12:31:38.583265122Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:39 functional-782022 containerd[3890]: time="2025-09-29T12:31:39.246275421Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:41 functional-782022 containerd[3890]: time="2025-09-29T12:31:41.107921666Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:31:41 functional-782022 containerd[3890]: time="2025-09-29T12:31:41.107989190Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10965"
	Sep 29 12:31:41 functional-782022 containerd[3890]: time="2025-09-29T12:31:41.108732856Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Sep 29 12:31:41 functional-782022 containerd[3890]: time="2025-09-29T12:31:41.110049060Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:41 functional-782022 containerd[3890]: time="2025-09-29T12:31:41.776477554Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:31:43 functional-782022 containerd[3890]: time="2025-09-29T12:31:43.653142357Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:31:43 functional-782022 containerd[3890]: time="2025-09-29T12:31:43.653223777Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
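
Two separate problems are visible in this containerd log: every pull attempt first logs `failed to decode hosts.toml` with `invalid `host` tree`, meaning the registry hosts.toml on the node is malformed, and the pulls themselves then fail with 429 Too Many Requests from Docker Hub's unauthenticated pull limit. As a hedged sketch (paths follow containerd's standard certs.d layout and the mirror URL is purely illustrative, not taken from this run), a well-formed hosts.toml looks like the following, and side-loading the test images avoids the registry pulls entirely:

	# sketch only: run inside the node (minikube -p functional-782022 ssh); rewrites docker.io's hosts.toml with a minimal valid tree
	sudo mkdir -p /etc/containerd/certs.d/docker.io
	sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<-'EOF'
	server = "https://registry-1.docker.io"

	[host."https://mirror.example.com"]
	  capabilities = ["pull", "resolve"]
	EOF

	# sketch only: run on the host; pre-loads the images this test tries to pull so no registry request is needed
	minikube -p functional-782022 image load docker.io/mysql:5.7
	minikube -p functional-782022 image load docker.io/nginx:alpine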
	
	
	==> coredns [0bbcc10ce4fbd6447f09bdba14f490c4feeec967a90d14aae73d8da93b645593] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48424 - 35954 "HINFO IN 1036536613831132561.4260401396292498277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044239158s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
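
The repeated connection-refused errors against 10.96.0.1:443 in this CoreDNS instance line up with the window in which the kube-apiserver was being restarted; CoreDNS starts with an unsynced Kubernetes API (the WARNING above) and settles once the API becomes reachable again. A hedged sketch for confirming it recovered, using the standard kube-dns label and the kubeconfig context this run configured:

	# sketch only
	kubectl --context functional-782022 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context functional-782022 -n kube-system logs -l k8s-app=kube-dns --tail=20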
	
	
	==> coredns [43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35447 - 30519 "HINFO IN 5997367880324166599.6195756209000097168. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028452503s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-782022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-782022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=functional-782022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-782022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:31:42 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:31:42 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:31:42 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:31:42 +0000   Mon, 29 Sep 2025 12:24:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-782022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e290b74a50b4c5797f445ba16a9585a
	  System UUID:                f8a45455-22cc-4de2-93e1-996efdb799ef
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-gv7jt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     hello-node-connect-7d85dfc575-9rv7l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     mysql-5bb876957f-2z7m2                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     25s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-zm6rn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m29s
	  kube-system                 etcd-functional-782022                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m35s
	  kube-system                 kindnet-gk4hp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m29s
	  kube-system                 kube-apiserver-functional-782022             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-functional-782022    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-proxy-dnlcd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-scheduler-functional-782022             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m28s                  kube-proxy       
	  Normal  Starting                 6m27s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m35s                  kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m35s                  kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s                  kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m35s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m30s                  node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m30s                  node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
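
This node summary has the shape of `kubectl describe node` output; the doubled Starting/RegisteredNode events (around 7m35s and again around 6m35s) match the control-plane restart performed by the functional suite rather than an unexpected crash. A hedged sketch for regenerating it against the same profile:

	# sketch only: the context name follows the minikube profile configured earlier in this log
	kubectl --context functional-782022 describe node functional-782022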
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +1.517728] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a5 e4 37 95 62 08 06
	[  +0.115888] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[ +12.890125] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 a3 59 25 5e a0 08 06
	[  +0.000394] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +5.179291] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	[Sep29 12:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[ +13.445656] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 1e 7c f1 b5 0d 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[  +7.699318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 ba 46 0d 66 00 08 06
	[  +0.000403] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[  +4.637857] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 16 6b 9e 59 3c 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
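
The "martian source" entries are the kernel logging packets whose source address it does not expect on eth0; with the Docker driver and the kindnet bridge CNI this is common background noise and, on its own, does not point at the test failure. If the noise is unwanted, a hedged sketch for muting it on the node (this only disables the logging, it changes no routing behaviour):

	# sketch only
	minikube -p functional-782022 ssh -- sudo sysctl -w net.ipv4.conf.all.log_martians=0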
	
	
	==> etcd [b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3] <==
	{"level":"warn","ts":"2025-09-29T12:24:09.508315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.514616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.520742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.528061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.539044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.545262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.552397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:25:10.869561Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:25:10.869658Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:25:10.869764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871378Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.871477Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T12:25:10.871570Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871571Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871554Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:25:10.871603Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871602Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871615Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:25:10.871618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T12:25:10.871626Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873625Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T12:25:10.873686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873718Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T12:25:10.873728Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [f2c5ba8dccddf943d6966d386ff390e8a8543cb1ecced1f03cc46a777ec12f12] <==
	{"level":"warn","ts":"2025-09-29T12:25:13.975463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:13.984045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:13.990602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:13.996935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.005286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.011672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.017730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.029057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.034829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.042108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.048270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.054551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.060723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.067227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.073406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.081074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.088521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.095420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.102293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.109042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.126214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.129431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.136215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.142728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.187334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:31:47 up  5:14,  0 users,  load average: 0.38, 0.74, 1.64
	Linux functional-782022 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6] <==
	I0929 12:24:19.051272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 12:24:19.051515       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 12:24:19.051679       1 main.go:148] setting mtu 1500 for CNI 
	I0929 12:24:19.051696       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 12:24:19.051709       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T12:24:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 12:24:19.254620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 12:24:19.254648       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 12:24:19.254660       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 12:24:19.332675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 12:24:19.732111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 12:24:19.732144       1 metrics.go:72] Registering metrics
	I0929 12:24:19.732204       1 controller.go:711] "Syncing nftables rules"
	I0929 12:24:29.255389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:29.255476       1 main.go:301] handling current node
	I0929 12:24:39.258047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:39.258081       1 main.go:301] handling current node
	I0929 12:24:49.264050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:49.264086       1 main.go:301] handling current node
	I0929 12:24:59.256428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:59.256509       1 main.go:301] handling current node
	
	
	==> kindnet [7184ba4a9a391b6fe9af875c4cf7b7ec1e446596766a273e493fd073ac49febe] <==
	I0929 12:29:42.038130       1 main.go:301] handling current node
	I0929 12:29:52.039544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:29:52.039586       1 main.go:301] handling current node
	I0929 12:30:02.034367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:02.034420       1 main.go:301] handling current node
	I0929 12:30:12.039234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:12.039270       1 main.go:301] handling current node
	I0929 12:30:22.036791       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:22.036839       1 main.go:301] handling current node
	I0929 12:30:32.035433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:32.035799       1 main.go:301] handling current node
	I0929 12:30:42.043032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:42.043069       1 main.go:301] handling current node
	I0929 12:30:52.038560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:30:52.038598       1 main.go:301] handling current node
	I0929 12:31:02.036033       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:31:02.036079       1 main.go:301] handling current node
	I0929 12:31:12.035077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:31:12.035112       1 main.go:301] handling current node
	I0929 12:31:22.043052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:31:22.043091       1 main.go:301] handling current node
	I0929 12:31:32.038241       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:31:32.038276       1 main.go:301] handling current node
	I0929 12:31:42.036328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:31:42.036368       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bcc5e572f0ecc825bbe90fc103944f2ce30f09993a6a7567dc1ecef379c4c13c] <==
	I0929 12:25:15.582064       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 12:25:15.622914       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0929 12:25:15.889391       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 12:25:15.890804       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 12:25:15.896269       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 12:25:16.452648       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 12:25:16.555398       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 12:25:16.619506       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 12:25:16.626797       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 12:25:18.070419       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 12:25:35.630809       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.26.213"}
	I0929 12:25:39.875572       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.146.92"}
	I0929 12:25:41.209278       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.154.94"}
	I0929 12:25:41.675258       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.36.175"}
	I0929 12:26:14.935892       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:26:16.129046       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:27:40.912383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:27:42.979991       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:28:41.057645       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:29:07.263713       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:29:44.231603       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:07.893764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:58.445280       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:17.663939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:22.285425       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.89.153"}
	
	
	==> kube-controller-manager [11011c88597fc968304178a065fd8bcd3e220e2bbf17e0361a296ac56203173b] <==
	I0929 12:25:17.975884       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:25:17.975890       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:25:17.979027       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 12:25:17.981284       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:25:17.983642       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 12:25:17.987981       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 12:25:18.003261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.008602       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:25:18.010845       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:25:18.015307       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:25:18.015736       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:25:18.015770       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:25:18.016951       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 12:25:18.016999       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:25:18.017008       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:25:18.017061       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 12:25:18.017070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:25:18.017107       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:25:18.017114       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 12:25:18.017207       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:25:18.018697       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 12:25:18.021924       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:25:18.022150       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.024225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:25:18.038487       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97] <==
	I0929 12:24:17.256686       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:24:17.256794       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:24:17.256803       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 12:24:17.257152       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:24:17.257295       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:24:17.257472       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 12:24:17.257487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:24:17.259069       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:24:17.259091       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:24:17.259216       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:24:17.259310       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-782022"
	I0929 12:24:17.259352       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:24:17.260382       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:24:17.260468       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:24:17.260617       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:24:17.260680       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:24:17.260695       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:24:17.260702       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:24:17.260737       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.263845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.267580       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:24:17.271688       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-782022" podCIDRs=["10.244.0.0/24"]
	I0929 12:24:17.278010       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7def58db713d2ef060792b4dec8bbc81106fc7d4ec2b7322f8bfd3474f0c638f] <==
	I0929 12:25:01.693066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 12:25:01.694237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:02.740797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:04.709796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:09.735699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 12:25:20.593272       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:25:20.593323       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:25:20.593446       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:25:20.616514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:25:20.616586       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:25:20.622267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:25:20.622658       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:25:20.622676       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:20.624180       1 config.go:200] "Starting service config controller"
	I0929 12:25:20.624202       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:25:20.624207       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:25:20.624222       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:25:20.624247       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:25:20.624252       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:25:20.624279       1 config.go:309] "Starting node config controller"
	I0929 12:25:20.624291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:25:20.725264       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:25:20.725297       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:25:20.725368       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:25:20.725390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c] <==
	I0929 12:24:18.577732       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:24:18.633856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:24:18.734060       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:24:18.734113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:24:18.734257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:24:18.850654       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:24:18.850737       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:24:18.857213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:24:18.857575       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:24:18.857597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:24:18.859119       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:24:18.859526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:24:18.859143       1 config.go:200] "Starting service config controller"
	I0929 12:24:18.859205       1 config.go:309] "Starting node config controller"
	I0929 12:24:18.859571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:24:18.859580       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:24:18.859600       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:24:18.859300       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:24:18.859730       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:24:18.959729       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:24:18.959820       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:24:18.959861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0] <==
	E0929 12:24:10.048487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:24:10.048503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:24:10.048519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:24:10.048543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:24:10.048540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:10.048541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:24:10.048653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:24:10.049080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:10.049118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:24:10.884036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:24:10.973647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:24:10.995046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:11.049696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:24:11.081784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:11.196404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:24:11.198401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:24:11.218661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:24:11.362047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:24:13.945049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988013       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988132       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:25:10.988193       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:25:10.988224       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:25:10.988263       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:25:10.988295       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f45ad0c405faa9c17629b0b8f69da5b981a87271e7727e52d833a13ae68a780b] <==
	I0929 12:25:14.313310       1 serving.go:386] Generated self-signed cert in-memory
	I0929 12:25:14.639875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:25:14.639900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:14.644894       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 12:25:14.644913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644912       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.644932       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 12:25:14.644936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.645395       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:25:14.645456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:25:14.745516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.745524       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.745526       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Sep 29 12:31:18 functional-782022 kubelet[4825]: E0929 12:31:18.581555    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:31:21 functional-782022 kubelet[4825]: E0929 12:31:21.581087    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:31:21 functional-782022 kubelet[4825]: E0929 12:31:21.581087    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:31:22 functional-782022 kubelet[4825]: I0929 12:31:22.409439    4825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzbr\" (UniqueName: \"kubernetes.io/projected/e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4-kube-api-access-9qzbr\") pod \"mysql-5bb876957f-2z7m2\" (UID: \"e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4\") " pod="default/mysql-5bb876957f-2z7m2"
	Sep 29 12:31:24 functional-782022 kubelet[4825]: E0929 12:31:24.581400    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:31:25 functional-782022 kubelet[4825]: E0929 12:31:25.277895    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 12:31:25 functional-782022 kubelet[4825]: E0929 12:31:25.277952    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 12:31:25 functional-782022 kubelet[4825]: E0929 12:31:25.278085    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-2z7m2_default(e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:31:25 functional-782022 kubelet[4825]: E0929 12:31:25.278140    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:31:25 functional-782022 kubelet[4825]: E0929 12:31:25.482622    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:31:29 functional-782022 kubelet[4825]: E0929 12:31:29.581679    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:31:34 functional-782022 kubelet[4825]: E0929 12:31:34.581205    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:31:38 functional-782022 kubelet[4825]: E0929 12:31:38.106110    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 29 12:31:38 functional-782022 kubelet[4825]: E0929 12:31:38.106171    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 29 12:31:38 functional-782022 kubelet[4825]: E0929 12:31:38.106266    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-gv7jt_default(b661658c-5c1b-440c-937c-5f64eae745c1): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:31:38 functional-782022 kubelet[4825]: E0929 12:31:38.106304    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:31:39 functional-782022 kubelet[4825]: E0929 12:31:39.581441    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:31:41 functional-782022 kubelet[4825]: E0929 12:31:41.108216    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 12:31:41 functional-782022 kubelet[4825]: E0929 12:31:41.108269    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 12:31:41 functional-782022 kubelet[4825]: E0929 12:31:41.108471    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-2z7m2_default(e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:31:41 functional-782022 kubelet[4825]: E0929 12:31:41.108539    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:31:43 functional-782022 kubelet[4825]: E0929 12:31:43.653498    4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 12:31:43 functional-782022 kubelet[4825]: E0929 12:31:43.653558    4825 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 12:31:43 functional-782022 kubelet[4825]: E0929 12:31:43.653650    4825 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(6f4f78d3-af0d-455f-a1a8-728cfbe1024e): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 12:31:43 functional-782022 kubelet[4825]: E0929 12:31:43.653692    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	
	
	==> storage-provisioner [1a1e77b489c91eed8244dea845d1c614d96d31d5eeb78fb695a95f6dcd0cc57d] <==
	I0929 12:25:07.440515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:25:07.443666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [897d454b937641e694c437680fd88a379717c743494df6b5e5785964165119dd] <==
	W0929 12:31:22.677885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:24.680767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:24.684995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:26.687541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:26.692113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:28.694920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:28.698699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:30.701715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:30.705673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:32.708825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:32.713084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:34.716448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:34.722881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:36.726199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:36.730441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:38.733800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:38.738800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:40.741883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:40.745910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:42.749916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:42.755331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:44.758557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:44.762622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:46.766125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:31:46.770345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
helpers_test.go:269: (dbg) Run:  kubectl --context functional-782022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-782022 describe pod hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-782022 describe pod hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod:

-- stdout --
	Name:             hello-node-75c85bcc94-gv7jt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:39 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfsw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gfsw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m9s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-gv7jt to functional-782022
	  Normal   Pulling    3m3s (x5 over 6m8s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m (x5 over 6m6s)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m (x5 over 6m6s)    kubelet            Error: ErrImagePull
	  Warning  Failed     62s (x19 over 6m5s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    38s (x21 over 6m5s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-9rv7l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9cb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nq9cb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m7s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9rv7l to functional-782022
	  Normal   Pulling    2m51s (x5 over 6m6s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m48s (x5 over 6m1s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m48s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     50s (x20 over 6m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    39s (x21 over 6m)     kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-2z7m2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:22 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9qzbr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9qzbr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  26s                default-scheduler  Successfully assigned default/mysql-5bb876957f-2z7m2 to functional-782022
	  Normal   BackOff    23s                kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     23s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    10s (x2 over 26s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7s (x2 over 23s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7s (x2 over 23s)   kubelet            Error: ErrImagePull
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvshw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rvshw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m7s                  default-scheduler  Successfully assigned default/nginx-svc to functional-782022
	  Normal   Pulling    2m56s (x5 over 6m7s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m53s (x5 over 6m3s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m53s (x5 over 6m3s)  kubelet            Error: ErrImagePull
	  Warning  Failed     58s (x20 over 6m3s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    43s (x21 over 6m3s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:46 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lspjd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lspjd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-782022
	  Normal   Pulling    2m50s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m46s (x5 over 5m58s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m46s (x5 over 5m58s)  kubelet            Error: ErrImagePull
	  Warning  Failed     54s (x19 over 5m58s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    27s (x21 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (369.19s)

x
+
TestFunctional/parallel/MySQL (602.9s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-782022 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-2z7m2" [e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-29 12:41:22.633649586 +0000 UTC m=+1447.217672775
functional_test.go:1804: (dbg) Run:  kubectl --context functional-782022 describe po mysql-5bb876957f-2z7m2 -n default
functional_test.go:1804: (dbg) kubectl --context functional-782022 describe po mysql-5bb876957f-2z7m2 -n default:
Name:             mysql-5bb876957f-2z7m2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-782022/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:31:22 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9qzbr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9qzbr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-2z7m2 to functional-782022
Normal   Pulling    6m50s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     6m47s (x5 over 9m57s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m47s (x5 over 9m57s)   kubelet            Error: ErrImagePull
Warning  Failed     4m49s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 9m57s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-782022 logs mysql-5bb876957f-2z7m2 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-782022 logs mysql-5bb876957f-2z7m2 -n default: exit status 1 (70.30994ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-2z7m2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-782022 logs mysql-5bb876957f-2z7m2 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-782022
helpers_test.go:243: (dbg) docker inspect functional-782022:

-- stdout --
	[
	    {
	        "Id": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	        "Created": "2025-09-29T12:23:56.273004679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1134005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:23:56.310015171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hostname",
	        "HostsPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/hosts",
	        "LogPath": "/var/lib/docker/containers/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298/1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298-json.log",
	        "Name": "/functional-782022",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-782022:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-782022",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1786c46f38527c7de691196a7fdb1ca120239ca565db05edd3bce1d85e01f298",
	                "LowerDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c5ab5481c7a994cd5d59e2e9db0e3dcde4fc8f67196d7f2e7829042bdd20fba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-782022",
	                "Source": "/var/lib/docker/volumes/functional-782022/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-782022",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-782022",
	                "name.minikube.sigs.k8s.io": "functional-782022",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50c97dcabfb7e5784a6ece9564a867bb96ac9a43766c5bddf69d122258ab8e5a",
	            "SandboxKey": "/var/run/docker/netns/50c97dcabfb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33276"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33277"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33280"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33278"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33279"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-782022": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:3f:40:a7:34:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1188f63361a712974d693e573886a912852ad97f16abca5373c8b53f08ee79f7",
	                    "EndpointID": "aaa66c085157d64022cbc7f79013ed8657f004e3bea603739a48e13aecf19afc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-782022",
	                        "1786c46f3852"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-782022 -n functional-782022
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 logs -n 25: (1.468529621s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount2 --alsologtostderr -v=1 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ mount          │ -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount3 --alsologtostderr -v=1 │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:31 UTC │                     │
	│ ssh            │ functional-782022 ssh findmnt -T /mount1                                                                           │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ ssh            │ functional-782022 ssh findmnt -T /mount2                                                                           │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ ssh            │ functional-782022 ssh findmnt -T /mount3                                                                           │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │ 29 Sep 25 12:32 UTC │
	│ mount          │ -p functional-782022 --kill=true                                                                                   │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start          │ -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start          │ -p functional-782022 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ start          │ -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-782022 --alsologtostderr -v=1                                                     │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:32 UTC │                     │
	│ service        │ functional-782022 service list                                                                                     │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ service        │ functional-782022 service list -o json                                                                             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ service        │ functional-782022 service --namespace=default --https --url hello-node                                             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ service        │ functional-782022 service hello-node --url --format={{.IP}}                                                        │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ service        │ functional-782022 service hello-node --url                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ image          │ functional-782022 image ls --format short --alsologtostderr                                                        │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ ssh            │ functional-782022 ssh pgrep buildkitd                                                                              │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │                     │
	│ image          │ functional-782022 image build -t localhost/my-image:functional-782022 testdata/build --alsologtostderr             │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls --format yaml --alsologtostderr                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls --format json --alsologtostderr                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls --format table --alsologtostderr                                                        │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ image          │ functional-782022 image ls                                                                                         │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ update-context │ functional-782022 update-context --alsologtostderr -v=2                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ update-context │ functional-782022 update-context --alsologtostderr -v=2                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	│ update-context │ functional-782022 update-context --alsologtostderr -v=2                                                            │ functional-782022 │ jenkins │ v1.37.0 │ 29 Sep 25 12:35 UTC │ 29 Sep 25 12:35 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:32:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:32:01.623393 1152286 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:32:01.623483 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623490 1152286 out.go:374] Setting ErrFile to fd 2...
	I0929 12:32:01.623494 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623773 1152286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:32:01.624207 1152286 out.go:368] Setting JSON to false
	I0929 12:32:01.625266 1152286 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18859,"bootTime":1759130263,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:32:01.625351 1152286 start.go:140] virtualization: kvm guest
	I0929 12:32:01.627031 1152286 out.go:179] * [functional-782022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:32:01.628165 1152286 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:32:01.628190 1152286 notify.go:220] Checking for updates...
	I0929 12:32:01.630582 1152286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:32:01.631578 1152286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:32:01.632499 1152286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:32:01.633553 1152286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:32:01.634463 1152286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:32:01.635841 1152286 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:32:01.636357 1152286 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:32:01.659705 1152286 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:32:01.659797 1152286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:32:01.714441 1152286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 12:32:01.703718947 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:32:01.714537 1152286 docker.go:318] overlay module found
	I0929 12:32:01.716020 1152286 out.go:179] * Using the docker driver based on existing profile
	I0929 12:32:01.717088 1152286 start.go:304] selected driver: docker
	I0929 12:32:01.717115 1152286 start.go:924] validating driver "docker" against &{Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:32:01.717223 1152286 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:32:01.718814 1152286 out.go:203] 
	W0929 12:32:01.719743 1152286 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 12:32:01.720687 1152286 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7a8e04cbed167       56cc512116c8f       9 minutes ago       Exited              mount-munger              0                   75e91da73587b       busybox-mount
	897d454b93764       6e38f40d628db       16 minutes ago      Running             storage-provisioner       3                   81f3483cb1bf7       storage-provisioner
	bcc5e572f0ecc       90550c43ad2bc       16 minutes ago      Running             kube-apiserver            0                   1eff7df7d085a       kube-apiserver-functional-782022
	f45ad0c405faa       46169d968e920       16 minutes ago      Running             kube-scheduler            1                   1ab56f1d7c59d       kube-scheduler-functional-782022
	f2c5ba8dccddf       5f1f5298c888d       16 minutes ago      Running             etcd                      1                   684c3b624bc22       etcd-functional-782022
	11011c88597fc       a0af72f2ec6d6       16 minutes ago      Running             kube-controller-manager   1                   8d20f6e965d70       kube-controller-manager-functional-782022
	1a1e77b489c91       6e38f40d628db       16 minutes ago      Exited              storage-provisioner       2                   81f3483cb1bf7       storage-provisioner
	0bbcc10ce4fbd       52546a367cc9e       16 minutes ago      Running             coredns                   1                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	7184ba4a9a391       409467f978b4a       16 minutes ago      Running             kindnet-cni               1                   bbcc18db59f8f       kindnet-gk4hp
	7def58db713d2       df0860106674d       16 minutes ago      Running             kube-proxy                1                   560b48c7f2dd0       kube-proxy-dnlcd
	43db49e8ed43f       52546a367cc9e       16 minutes ago      Exited              coredns                   0                   05c745f61eda2       coredns-66bc5c9577-zm6rn
	4a8d7cb63e108       409467f978b4a       17 minutes ago      Exited              kindnet-cni               0                   bbcc18db59f8f       kindnet-gk4hp
	f8040ae956a29       df0860106674d       17 minutes ago      Exited              kube-proxy                0                   560b48c7f2dd0       kube-proxy-dnlcd
	b0de26d17b60c       5f1f5298c888d       17 minutes ago      Exited              etcd                      0                   684c3b624bc22       etcd-functional-782022
	c72893cda0718       a0af72f2ec6d6       17 minutes ago      Exited              kube-controller-manager   0                   8d20f6e965d70       kube-controller-manager-functional-782022
	a87bec1b3ee14       46169d968e920       17 minutes ago      Exited              kube-scheduler            0                   1ab56f1d7c59d       kube-scheduler-functional-782022
	
	
	==> containerd <==
	Sep 29 12:37:02 functional-782022 containerd[3890]: time="2025-09-29T12:37:02.582059791Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Sep 29 12:37:02 functional-782022 containerd[3890]: time="2025-09-29T12:37:02.584232334Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:03 functional-782022 containerd[3890]: time="2025-09-29T12:37:03.252091556Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:05 functional-782022 containerd[3890]: time="2025-09-29T12:37:05.120447556Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:37:05 functional-782022 containerd[3890]: time="2025-09-29T12:37:05.120498532Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Sep 29 12:37:08 functional-782022 containerd[3890]: time="2025-09-29T12:37:08.581865432Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Sep 29 12:37:08 functional-782022 containerd[3890]: time="2025-09-29T12:37:08.583622360Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:09 functional-782022 containerd[3890]: time="2025-09-29T12:37:09.236323315Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:11 functional-782022 containerd[3890]: time="2025-09-29T12:37:11.105363091Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:37:11 functional-782022 containerd[3890]: time="2025-09-29T12:37:11.105394687Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Sep 29 12:37:23 functional-782022 containerd[3890]: time="2025-09-29T12:37:23.581913029Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Sep 29 12:37:23 functional-782022 containerd[3890]: time="2025-09-29T12:37:23.583541339Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:24 functional-782022 containerd[3890]: time="2025-09-29T12:37:24.257119666Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:26 functional-782022 containerd[3890]: time="2025-09-29T12:37:26.117310498Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:37:26 functional-782022 containerd[3890]: time="2025-09-29T12:37:26.117407907Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10966"
	Sep 29 12:37:50 functional-782022 containerd[3890]: time="2025-09-29T12:37:50.582088443Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 29 12:37:50 functional-782022 containerd[3890]: time="2025-09-29T12:37:50.583828760Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:51 functional-782022 containerd[3890]: time="2025-09-29T12:37:51.240650065Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:37:53 functional-782022 containerd[3890]: time="2025-09-29T12:37:53.102207368Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:37:53 functional-782022 containerd[3890]: time="2025-09-29T12:37:53.102253422Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Sep 29 12:38:03 functional-782022 containerd[3890]: time="2025-09-29T12:38:03.582195114Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 12:38:03 functional-782022 containerd[3890]: time="2025-09-29T12:38:03.583799746Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:38:04 functional-782022 containerd[3890]: time="2025-09-29T12:38:04.254209330Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 12:38:06 functional-782022 containerd[3890]: time="2025-09-29T12:38:06.104290420Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:38:06 functional-782022 containerd[3890]: time="2025-09-29T12:38:06.104337978Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	
	
	==> coredns [0bbcc10ce4fbd6447f09bdba14f490c4feeec967a90d14aae73d8da93b645593] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48424 - 35954 "HINFO IN 1036536613831132561.4260401396292498277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044239158s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [43db49e8ed43f24eb4141339439623039f1c25d6fa08c6a6973f5121b66d3b14] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35447 - 30519 "HINFO IN 5997367880324166599.6195756209000097168. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028452503s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-782022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-782022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=functional-782022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-782022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:41:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:39:42 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:39:42 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:39:42 +0000   Mon, 29 Sep 2025 12:24:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:39:42 +0000   Mon, 29 Sep 2025 12:24:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-782022
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e290b74a50b4c5797f445ba16a9585a
	  System UUID:                f8a45455-22cc-4de2-93e1-996efdb799ef
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-gv7jt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-7d85dfc575-9rv7l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     mysql-5bb876957f-2z7m2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-zm6rn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-functional-782022                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-gk4hp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-782022              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-782022     200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dnlcd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-782022              100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-lgh7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8bxd5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           17m                node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-782022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-782022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-782022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node functional-782022 event: Registered Node functional-782022 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +1.517728] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a5 e4 37 95 62 08 06
	[  +0.115888] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[ +12.890125] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 a3 59 25 5e a0 08 06
	[  +0.000394] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 e7 e8 51 10 6b 08 06
	[  +5.179291] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	[Sep29 12:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[ +13.445656] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 1e 7c f1 b5 0d 08 06
	[  +0.000381] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 81 e5 e6 16 48 08 06
	[  +7.699318] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 ba 46 0d 66 00 08 06
	[  +0.000403] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 41 b4 9f 67 06 08 06
	[  +4.637857] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 16 6b 9e 59 3c 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e f5 e3 4f f3 1f 08 06
	
	
	==> etcd [b0de26d17b60cee3ea0ffbb0def9bf68b3b7b8fec1a615de7e7e3b1ffdbf44d3] <==
	{"level":"warn","ts":"2025-09-29T12:24:09.508315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.514616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.520742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.528061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.539044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.545262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:24:09.552397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:25:10.869561Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:25:10.869658Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:25:10.869764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871378Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:25:10.871458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.871477Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T12:25:10.871570Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871571Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871554Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:25:10.871603Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871602Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:25:10.871615Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:25:10.871618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T12:25:10.871626Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873625Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T12:25:10.873686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:25:10.873718Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T12:25:10.873728Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-782022","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [f2c5ba8dccddf943d6966d386ff390e8a8543cb1ecced1f03cc46a777ec12f12] <==
	{"level":"warn","ts":"2025-09-29T12:25:14.017730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.029057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.034829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.042108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.048270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.054551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.060723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.067227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.073406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.081074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.088521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.095420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.102293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.109042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.126214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.129431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.136215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.142728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:25:14.187334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:35:13.684888Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1072}
	{"level":"info","ts":"2025-09-29T12:35:13.705722Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1072,"took":"20.470272ms","hash":1187297717,"current-db-size-bytes":3743744,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1847296,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T12:35:13.705768Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1187297717,"revision":1072,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T12:40:13.690668Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1574}
	{"level":"info","ts":"2025-09-29T12:40:13.694431Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1574,"took":"3.32564ms","hash":2615398914,"current-db-size-bytes":3743744,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2560000,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2025-09-29T12:40:13.694473Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2615398914,"revision":1574,"compact-revision":1072}
	
	
	==> kernel <==
	 12:41:24 up  5:23,  0 users,  load average: 0.13, 0.25, 0.95
	Linux functional-782022 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4a8d7cb63e108e5508f3caa9c423a89228d2d7dd659718cf9c657cb63b960bd6] <==
	I0929 12:24:19.051272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 12:24:19.051515       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 12:24:19.051679       1 main.go:148] setting mtu 1500 for CNI 
	I0929 12:24:19.051696       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 12:24:19.051709       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T12:24:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 12:24:19.254620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 12:24:19.254648       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 12:24:19.254660       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 12:24:19.332675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 12:24:19.732111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 12:24:19.732144       1 metrics.go:72] Registering metrics
	I0929 12:24:19.732204       1 controller.go:711] "Syncing nftables rules"
	I0929 12:24:29.255389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:29.255476       1 main.go:301] handling current node
	I0929 12:24:39.258047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:39.258081       1 main.go:301] handling current node
	I0929 12:24:49.264050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:49.264086       1 main.go:301] handling current node
	I0929 12:24:59.256428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:24:59.256509       1 main.go:301] handling current node
	
	
	==> kindnet [7184ba4a9a391b6fe9af875c4cf7b7ec1e446596766a273e493fd073ac49febe] <==
	I0929 12:39:22.035354       1 main.go:301] handling current node
	I0929 12:39:32.034312       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:32.034352       1 main.go:301] handling current node
	I0929 12:39:42.039697       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:42.039737       1 main.go:301] handling current node
	I0929 12:39:52.034238       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:39:52.034302       1 main.go:301] handling current node
	I0929 12:40:02.039666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:02.039705       1 main.go:301] handling current node
	I0929 12:40:12.039079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:12.039116       1 main.go:301] handling current node
	I0929 12:40:22.043119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:22.043170       1 main.go:301] handling current node
	I0929 12:40:32.041372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:32.041407       1 main.go:301] handling current node
	I0929 12:40:42.034134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:42.034174       1 main.go:301] handling current node
	I0929 12:40:52.034200       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:40:52.034246       1 main.go:301] handling current node
	I0929 12:41:02.039219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:41:02.039255       1 main.go:301] handling current node
	I0929 12:41:12.038647       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:41:12.038691       1 main.go:301] handling current node
	I0929 12:41:22.036878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 12:41:22.036934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bcc5e572f0ecc825bbe90fc103944f2ce30f09993a6a7567dc1ecef379c4c13c] <==
	I0929 12:29:44.231603       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:07.893764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:30:58.445280       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:17.663939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:31:22.285425       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.89.153"}
	I0929 12:32:02.602777       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 12:32:02.704384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.83.159"}
	I0929 12:32:02.714442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.200.223"}
	I0929 12:32:13.424480       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:32:26.436667       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:33:18.083588       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:33:41.415637       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:34:38.700855       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:03.361330       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:35:14.619132       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:35:52.430038       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:36:21.670939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:37:16.182171       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:37:35.012311       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:38:23.318662       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:38:46.456380       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:39:23.941879       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:40:16.426443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:40:26.414987       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:41:17.235159       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [11011c88597fc968304178a065fd8bcd3e220e2bbf17e0361a296ac56203173b] <==
	I0929 12:25:18.003261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.008602       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:25:18.010845       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:25:18.015307       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:25:18.015736       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:25:18.015770       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:25:18.016951       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 12:25:18.016999       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:25:18.017008       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:25:18.017061       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 12:25:18.017070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:25:18.017107       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:25:18.017114       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 12:25:18.017207       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:25:18.018697       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 12:25:18.021924       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:25:18.022150       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:25:18.024225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:25:18.038487       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 12:32:02.648029       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.652165       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.652282       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.655219       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.656606       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 12:32:02.660668       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c72893cda07183eef7eadd6ed0b844f67fdb9a7c8349578dc21bde2e7c064d97] <==
	I0929 12:24:17.256686       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:24:17.256794       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:24:17.256803       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 12:24:17.257121       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 12:24:17.257152       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:24:17.257295       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:24:17.257472       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 12:24:17.257487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 12:24:17.259069       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:24:17.259091       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:24:17.259216       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:24:17.259310       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-782022"
	I0929 12:24:17.259352       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:24:17.260382       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:24:17.260468       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:24:17.260617       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:24:17.260680       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:24:17.260695       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:24:17.260702       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:24:17.260737       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.263845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:24:17.267580       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:24:17.271688       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-782022" podCIDRs=["10.244.0.0/24"]
	I0929 12:24:17.278010       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7def58db713d2ef060792b4dec8bbc81106fc7d4ec2b7322f8bfd3474f0c638f] <==
	I0929 12:25:01.693066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 12:25:01.694237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:02.740797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:04.709796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:25:09.735699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-782022&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 12:25:20.593272       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:25:20.593323       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:25:20.593446       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:25:20.616514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:25:20.616586       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:25:20.622267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:25:20.622658       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:25:20.622676       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:20.624180       1 config.go:200] "Starting service config controller"
	I0929 12:25:20.624202       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:25:20.624207       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:25:20.624222       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:25:20.624247       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:25:20.624252       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:25:20.624279       1 config.go:309] "Starting node config controller"
	I0929 12:25:20.624291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:25:20.725264       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:25:20.725297       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:25:20.725368       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:25:20.725390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f8040ae956a29db65d75560d0d961054428ad3355739826fdfa6c4689553ce6c] <==
	I0929 12:24:18.577732       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:24:18.633856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:24:18.734060       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:24:18.734113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 12:24:18.734257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:24:18.850654       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:24:18.850737       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:24:18.857213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:24:18.857575       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:24:18.857597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:24:18.859119       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:24:18.859526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:24:18.859143       1 config.go:200] "Starting service config controller"
	I0929 12:24:18.859205       1 config.go:309] "Starting node config controller"
	I0929 12:24:18.859571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:24:18.859580       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:24:18.859600       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:24:18.859300       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:24:18.859730       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:24:18.959729       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:24:18.959820       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:24:18.959861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a87bec1b3ee14c32cd766e09316d631522aea0fcba6f24d9c0d707e90c6859a0] <==
	E0929 12:24:10.048487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:24:10.048503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:24:10.048519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:24:10.048543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:24:10.048540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:10.048541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:24:10.048653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:24:10.049080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:10.049118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:24:10.884036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:24:10.973647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:24:10.995046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:24:11.049696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:24:11.081784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:24:11.196404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:24:11.198401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:24:11.218661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:24:11.362047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:24:13.945049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988013       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:10.988132       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:25:10.988193       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:25:10.988224       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:25:10.988263       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:25:10.988295       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f45ad0c405faa9c17629b0b8f69da5b981a87271e7727e52d833a13ae68a780b] <==
	I0929 12:25:14.313310       1 serving.go:386] Generated self-signed cert in-memory
	I0929 12:25:14.639875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:25:14.639900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:25:14.644894       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 12:25:14.644913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644912       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.644932       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 12:25:14.644936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.644941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.645395       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:25:14.645456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:25:14.745516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 12:25:14.745524       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:25:14.745526       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Sep 29 12:40:43 functional-782022 kubelet[4825]: E0929 12:40:43.581901    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:40:44 functional-782022 kubelet[4825]: E0929 12:40:44.581508    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:40:46 functional-782022 kubelet[4825]: E0929 12:40:46.580560    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:40:47 functional-782022 kubelet[4825]: E0929 12:40:47.581383    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:40:48 functional-782022 kubelet[4825]: E0929 12:40:48.580898    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:40:48 functional-782022 kubelet[4825]: E0929 12:40:48.581562    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:40:52 functional-782022 kubelet[4825]: E0929 12:40:52.581548    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:40:54 functional-782022 kubelet[4825]: E0929 12:40:54.581439    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:40:57 functional-782022 kubelet[4825]: E0929 12:40:57.581466    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:40:59 functional-782022 kubelet[4825]: E0929 12:40:59.581787    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:41:00 functional-782022 kubelet[4825]: E0929 12:41:00.580914    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:41:01 functional-782022 kubelet[4825]: E0929 12:41:01.581258    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:41:02 functional-782022 kubelet[4825]: E0929 12:41:02.580993    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:41:04 functional-782022 kubelet[4825]: E0929 12:41:04.581118    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:41:05 functional-782022 kubelet[4825]: E0929 12:41:05.581614    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:41:10 functional-782022 kubelet[4825]: E0929 12:41:10.581693    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:41:12 functional-782022 kubelet[4825]: E0929 12:41:12.581098    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:41:12 functional-782022 kubelet[4825]: E0929 12:41:12.581574    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	Sep 29 12:41:15 functional-782022 kubelet[4825]: E0929 12:41:15.581448    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9rv7l" podUID="ae8a4896-b6e0-4b77-979c-178f02f8aed1"
	Sep 29 12:41:16 functional-782022 kubelet[4825]: E0929 12:41:16.582500    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-lgh7n" podUID="d0347370-124a-4ceb-87db-90f10e018aa4"
	Sep 29 12:41:18 functional-782022 kubelet[4825]: E0929 12:41:18.580776    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-gv7jt" podUID="b661658c-5c1b-440c-937c-5f64eae745c1"
	Sep 29 12:41:20 functional-782022 kubelet[4825]: E0929 12:41:20.582326    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8bxd5" podUID="5f8be926-b1a2-41b6-aa79-115ced6fb907"
	Sep 29 12:41:22 functional-782022 kubelet[4825]: E0929 12:41:22.581803    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6f4f78d3-af0d-455f-a1a8-728cfbe1024e"
	Sep 29 12:41:23 functional-782022 kubelet[4825]: E0929 12:41:23.580682    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="06f2ff8c-eced-4819-bcca-da8efb85234c"
	Sep 29 12:41:23 functional-782022 kubelet[4825]: E0929 12:41:23.581275    4825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2z7m2" podUID="e3e3d6a7-8d43-48c1-8a0e-c35b42f327b4"
	
	
	==> storage-provisioner [1a1e77b489c91eed8244dea845d1c614d96d31d5eeb78fb695a95f6dcd0cc57d] <==
	I0929 12:25:07.440515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:25:07.443666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [897d454b937641e694c437680fd88a379717c743494df6b5e5785964165119dd] <==
	W0929 12:40:58.873708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:00.876550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:00.880446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:02.883521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:02.888249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:04.891320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:04.895293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:06.898906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:06.903058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:08.906915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:08.911772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:10.914820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:10.918645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:12.922395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:12.927515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:14.930440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:14.934416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:16.937399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:16.942287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:18.945111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:18.949057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:20.952803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:20.956731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:22.960151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:41:22.964104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
helpers_test.go:269: (dbg) Run:  kubectl --context functional-782022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5: exit status 1 (102.520061ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:52 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://7a8e04cbed167cce1d20d82f49a2d721d764fcfe23c8ad518a617952ec22d7d4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 12:31:55 +0000
	      Finished:     Mon, 29 Sep 2025 12:31:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqb4t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kqb4t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m32s  default-scheduler  Successfully assigned default/busybox-mount to functional-782022
	  Normal  Pulling    9m32s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.142s (2.142s including waiting). Image size: 2395207 bytes.
	  Normal  Created    9m30s  kubelet            Created container: mount-munger
	  Normal  Started    9m30s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-gv7jt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:39 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfsw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gfsw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-gv7jt to functional-782022
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    33s (x63 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     33s (x63 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-9rv7l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9cb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nq9cb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9rv7l to functional-782022
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    37s (x63 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     37s (x63 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-2z7m2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:31:22 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9qzbr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9qzbr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-2z7m2 to functional-782022
	  Normal   Pulling    6m53s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m50s (x5 over 10m)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m50s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x41 over 10m)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvshw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rvshw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/nginx-svc to functional-782022
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12m (x5 over 15m)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    37s (x64 over 15m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     37s (x64 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-782022/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 12:25:46 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lspjd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lspjd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/sp-pod to functional-782022
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    39s (x62 over 15m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     25s (x63 over 15m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-lgh7n" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8bxd5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-782022 describe pod busybox-mount hello-node-75c85bcc94-gv7jt hello-node-connect-7d85dfc575-9rv7l mysql-5bb876957f-2z7m2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-lgh7n kubernetes-dashboard-855c9754f9-8bxd5: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.90s)
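The mysql-5bb876957f-2z7m2 pod above never ran because every pull of docker.io/mysql:5.7 was answered with 429 Too Many Requests (the unauthenticated Docker Hub pull limit). One possible local mitigation, sketched here on the assumption that the Jenkins host can still obtain the image (for example after an authenticated docker login, or from an existing local copy), is to side-load the image into the profile so the kubelet never has to contact registry-1.docker.io:

    # pull (or reuse) the image on the host's Docker daemon
    docker pull docker.io/mysql:5.7

    # copy it into the functional-782022 node's containerd image store
    out/minikube-linux-amd64 -p functional-782022 image load docker.io/mysql:5.7

With the image already present on the node, the default IfNotPresent pull policy for the :5.7 tag would use the local copy instead of pulling from Docker Hub.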

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-782022 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-782022 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-gv7jt" [b661658c-5c1b-440c-937c-5f64eae745c1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 12:35:40.185449898 +0000 UTC m=+1104.769473089
functional_test.go:1460: (dbg) Run:  kubectl --context functional-782022 describe po hello-node-75c85bcc94-gv7jt -n default
functional_test.go:1460: (dbg) kubectl --context functional-782022 describe po hello-node-75c85bcc94-gv7jt -n default:
Name:             hello-node-75c85bcc94-gv7jt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-782022/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:25:39 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfsw5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-gfsw5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-gv7jt to functional-782022
  Normal   Pulling    6m55s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m52s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x19 over 9m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m30s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-782022 logs hello-node-75c85bcc94-gv7jt -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-782022 logs hello-node-75c85bcc94-gv7jt -n default: exit status 1 (67.236322ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-gv7jt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-782022 logs hello-node-75c85bcc94-gv7jt -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
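hello-node failed for the same underlying reason: every pull of kicbase/echo-server came back 429 from registry-1.docker.io. A quick way to confirm that the Docker Hub anonymous pull limit is exhausted (rather than a general networking problem) is the check Docker documents against its ratelimitpreview/test repository, sketched below; it assumes curl and jq are installed on the runner:

    # fetch an anonymous pull token, then read the rate-limit headers
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

A ratelimit-remaining of 0 would match the toomanyrequests errors recorded in the pod events above.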

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-782022 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6f4f78d3-af0d-455f-a1a8-728cfbe1024e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-782022 -n functional-782022
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-29 12:29:41.525175134 +0000 UTC m=+746.109198323
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-782022 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-782022 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-782022/192.168.49.2
Start Time:       Mon, 29 Sep 2025 12:25:41 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvshw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rvshw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-782022
  Normal   Pulling    49s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     46s (x5 over 3m56s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     46s (x5 over 3m56s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x14 over 3m56s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     5s (x14 over 3m56s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-782022 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-782022 logs nginx-svc -n default: exit status 1 (70.884638ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-782022 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)
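The 4m0s wait that timed out here is done by the Go helper, but it corresponds roughly to the kubectl commands below (a sketch against the same functional-782022 context; readiness would still require the nginx:alpine pull to succeed):

    kubectl --context functional-782022 apply -f testdata/testsvc.yaml
    kubectl --context functional-782022 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m0s

The wait fails for the same reason as the describe output shows: the container never leaves ImagePullBackOff, so the Ready condition never becomes true.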

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (89.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0929 12:29:41.661481 1101494 retry.go:31] will retry after 3.228967828s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:29:44.890601 1101494 retry.go:31] will retry after 4.586846117s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:29:49.478015 1101494 retry.go:31] will retry after 4.406541163s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:29:53.884849 1101494 retry.go:31] will retry after 5.473823831s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:29:59.359365 1101494 retry.go:31] will retry after 19.914008103s: Temporary Error: Get "http:": http: no Host in request URL
I0929 12:30:19.274342 1101494 retry.go:31] will retry after 20.177665675s: Temporary Error: Get "http:": http: no Host in request URL
E0929 12:30:20.683473 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0929 12:30:39.452904 1101494 retry.go:31] will retry after 31.890709858s: Temporary Error: Get "http:": http: no Host in request URL
E0929 12:30:48.386759 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-782022 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.99.154.94   10.99.154.94   80:32583/TCP   5m30s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (89.75s)
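The get svc output shows the tunnel did assign an external IP (10.99.154.94), even though the URL the test retried against came back empty. A manual check, assuming minikube tunnel were kept running in another terminal and the nginx pod were actually serving, might look like:

    # terminal 1: route the cluster's LoadBalancer IPs onto the host
    out/minikube-linux-amd64 -p functional-782022 tunnel

    # terminal 2: hit the EXTERNAL-IP reported by kubectl get svc nginx-svc
    curl -s http://10.99.154.94/ | grep -i "Welcome to nginx"

With the nginx-svc pod stuck in ImagePullBackOff there is nothing listening behind the service, so no "Welcome to nginx!" body could be returned either way.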

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 service --namespace=default --https --url hello-node: exit status 115 (540.839002ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30967
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-782022 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
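SVC_UNREACHABLE here reflects the check quoted in stderr: no running pod was found behind the hello-node service. A quick way to see the same thing directly, sketched against the same context, is to inspect the service's endpoints and the pods it selects:

    # no addresses listed means no ready pod is backing the service
    kubectl --context functional-782022 get endpoints hello-node
    kubectl --context functional-782022 get pods -l app=hello-node -o wide

Because hello-node-75c85bcc94-gv7jt is still in ImagePullBackOff, the endpoints object stays empty and the URL printed on stdout is not actually reachable.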

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 service hello-node --url --format={{.IP}}: exit status 115 (533.919002ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-782022 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 service hello-node --url: exit status 115 (528.991046ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30967
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-782022 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30967
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
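The URL printed above is simply the node IP plus the service's NodePort; both values can be recovered independently, which helps separate a URL-formatting problem from the pod-not-running problem seen here (a sketch, using the same profile and context):

    # NodePort assigned to the hello-node service
    kubectl --context functional-782022 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'

    # IP of the single minikube node
    out/minikube-linux-amd64 -p functional-782022 ip

Together these reproduce http://192.168.49.2:30967, which suggests the command computed the right endpoint and the failure is the SVC_UNREACHABLE check, not the URL itself.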

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (920.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (15m20.506535732s)

                                                
                                                
-- stdout --
	* [calico-321209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-321209" primary control-plane node in "calico-321209" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:05:53.028932 1359411 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:05:53.029265 1359411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:05:53.029278 1359411 out.go:374] Setting ErrFile to fd 2...
	I0929 13:05:53.029284 1359411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:05:53.029660 1359411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:05:53.030304 1359411 out.go:368] Setting JSON to false
	I0929 13:05:53.031515 1359411 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20890,"bootTime":1759130263,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:05:53.031610 1359411 start.go:140] virtualization: kvm guest
	I0929 13:05:53.033389 1359411 out.go:179] * [calico-321209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:05:53.034934 1359411 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:05:53.034949 1359411 notify.go:220] Checking for updates...
	I0929 13:05:53.037047 1359411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:05:53.038370 1359411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:05:53.039477 1359411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:05:53.040629 1359411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:05:53.041786 1359411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:05:53.043693 1359411 config.go:182] Loaded profile config "auto-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:05:53.043828 1359411 config.go:182] Loaded profile config "kindnet-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:05:53.043903 1359411 config.go:182] Loaded profile config "kubernetes-upgrade-629986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:05:53.044020 1359411 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:05:53.075572 1359411 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:05:53.075664 1359411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:05:53.136523 1359411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:05:53.125753555 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:05:53.136644 1359411 docker.go:318] overlay module found
	I0929 13:05:53.138148 1359411 out.go:179] * Using the docker driver based on user configuration
	I0929 13:05:53.139180 1359411 start.go:304] selected driver: docker
	I0929 13:05:53.139197 1359411 start.go:924] validating driver "docker" against <nil>
	I0929 13:05:53.139212 1359411 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:05:53.139782 1359411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:05:53.194622 1359411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:05:53.185008702 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:05:53.194817 1359411 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 13:05:53.195079 1359411 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:05:53.196529 1359411 out.go:179] * Using Docker driver with root privileges
	I0929 13:05:53.197527 1359411 cni.go:84] Creating CNI manager for "calico"
	I0929 13:05:53.197545 1359411 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0929 13:05:53.197614 1359411 start.go:348] cluster config:
	{Name:calico-321209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-321209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Auto
PauseInterval:1m0s}
	I0929 13:05:53.198753 1359411 out.go:179] * Starting "calico-321209" primary control-plane node in "calico-321209" cluster
	I0929 13:05:53.199715 1359411 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:05:53.200693 1359411 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:05:53.201654 1359411 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:05:53.201687 1359411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 13:05:53.201697 1359411 cache.go:58] Caching tarball of preloaded images
	I0929 13:05:53.201758 1359411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:05:53.201774 1359411 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 13:05:53.201782 1359411 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 13:05:53.201860 1359411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/config.json ...
	I0929 13:05:53.201889 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/config.json: {Name:mka5010c50f0244204f8eb0cb97663603d62d560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:05:53.224064 1359411 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:05:53.224080 1359411 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:05:53.224094 1359411 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:05:53.224121 1359411 start.go:360] acquireMachinesLock for calico-321209: {Name:mkffdbd85e4e768b780244c2ba3c7254d537d23e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:05:53.224230 1359411 start.go:364] duration metric: took 87.554µs to acquireMachinesLock for "calico-321209"
	I0929 13:05:53.224261 1359411 start.go:93] Provisioning new machine with config: &{Name:calico-321209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-321209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:05:53.224361 1359411 start.go:125] createHost starting for "" (driver="docker")
	I0929 13:05:53.225882 1359411 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:05:53.226169 1359411 start.go:159] libmachine.API.Create for "calico-321209" (driver="docker")
	I0929 13:05:53.226219 1359411 client.go:168] LocalClient.Create starting
	I0929 13:05:53.226277 1359411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem
	I0929 13:05:53.226312 1359411 main.go:141] libmachine: Decoding PEM data...
	I0929 13:05:53.226333 1359411 main.go:141] libmachine: Parsing certificate...
	I0929 13:05:53.226417 1359411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem
	I0929 13:05:53.226503 1359411 main.go:141] libmachine: Decoding PEM data...
	I0929 13:05:53.226553 1359411 main.go:141] libmachine: Parsing certificate...
	I0929 13:05:53.227041 1359411 cli_runner.go:164] Run: docker network inspect calico-321209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 13:05:53.243745 1359411 cli_runner.go:211] docker network inspect calico-321209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 13:05:53.243798 1359411 network_create.go:284] running [docker network inspect calico-321209] to gather additional debugging logs...
	I0929 13:05:53.243816 1359411 cli_runner.go:164] Run: docker network inspect calico-321209
	W0929 13:05:53.261438 1359411 cli_runner.go:211] docker network inspect calico-321209 returned with exit code 1
	I0929 13:05:53.261466 1359411 network_create.go:287] error running [docker network inspect calico-321209]: docker network inspect calico-321209: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-321209 not found
	I0929 13:05:53.261491 1359411 network_create.go:289] output of [docker network inspect calico-321209]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-321209 not found
	
	** /stderr **
	I0929 13:05:53.261610 1359411 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:05:53.280387 1359411 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ea048bcecb48 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:2d:df:61:03:8a} reservation:<nil>}
	I0929 13:05:53.281436 1359411 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bd167e5ce7a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:0f:ec:5b:6d:8a} reservation:<nil>}
	I0929 13:05:53.282512 1359411 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-29d6980ca283 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:24:81:41:84:f3} reservation:<nil>}
	I0929 13:05:53.283622 1359411 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c82570}
	I0929 13:05:53.283651 1359411 network_create.go:124] attempt to create docker network calico-321209 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0929 13:05:53.283710 1359411 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-321209 calico-321209
	I0929 13:05:53.348767 1359411 network_create.go:108] docker network calico-321209 192.168.76.0/24 created
	I0929 13:05:53.348793 1359411 kic.go:121] calculated static IP "192.168.76.2" for the "calico-321209" container
	I0929 13:05:53.348860 1359411 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:05:53.367393 1359411 cli_runner.go:164] Run: docker volume create calico-321209 --label name.minikube.sigs.k8s.io=calico-321209 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:05:53.386449 1359411 oci.go:103] Successfully created a docker volume calico-321209
	I0929 13:05:53.386526 1359411 cli_runner.go:164] Run: docker run --rm --name calico-321209-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-321209 --entrypoint /usr/bin/test -v calico-321209:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:05:53.783664 1359411 oci.go:107] Successfully prepared a docker volume calico-321209
	I0929 13:05:53.783710 1359411 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:05:53.783743 1359411 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:05:53.783807 1359411 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-321209:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:05:56.786471 1359411 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-321209:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.002617558s)
	I0929 13:05:56.786531 1359411 kic.go:203] duration metric: took 3.002781217s to extract preloaded images to volume ...
	W0929 13:05:56.786627 1359411 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 13:05:56.786666 1359411 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 13:05:56.786711 1359411 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:05:56.849153 1359411 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-321209 --name calico-321209 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-321209 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-321209 --network calico-321209 --ip 192.168.76.2 --volume calico-321209:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:05:57.161753 1359411 cli_runner.go:164] Run: docker container inspect calico-321209 --format={{.State.Running}}
	I0929 13:05:57.181538 1359411 cli_runner.go:164] Run: docker container inspect calico-321209 --format={{.State.Status}}
	I0929 13:05:57.201205 1359411 cli_runner.go:164] Run: docker exec calico-321209 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:05:57.263979 1359411 oci.go:144] the created container "calico-321209" has a running status.
	I0929 13:05:57.264019 1359411 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa...
	I0929 13:05:57.716672 1359411 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:05:57.746719 1359411 cli_runner.go:164] Run: docker container inspect calico-321209 --format={{.State.Status}}
	I0929 13:05:57.769595 1359411 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:05:57.769620 1359411 kic_runner.go:114] Args: [docker exec --privileged calico-321209 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:05:57.819013 1359411 cli_runner.go:164] Run: docker container inspect calico-321209 --format={{.State.Status}}
	I0929 13:05:57.841723 1359411 machine.go:93] provisionDockerMachine start ...
	I0929 13:05:57.841871 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:57.863406 1359411 main.go:141] libmachine: Using SSH client type: native
	I0929 13:05:57.863672 1359411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I0929 13:05:57.863692 1359411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:05:58.009145 1359411 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-321209
	
	I0929 13:05:58.009171 1359411 ubuntu.go:182] provisioning hostname "calico-321209"
	I0929 13:05:58.009230 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:58.027928 1359411 main.go:141] libmachine: Using SSH client type: native
	I0929 13:05:58.028279 1359411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I0929 13:05:58.028303 1359411 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-321209 && echo "calico-321209" | sudo tee /etc/hostname
	I0929 13:05:58.186028 1359411 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-321209
	
	I0929 13:05:58.186132 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:58.208411 1359411 main.go:141] libmachine: Using SSH client type: native
	I0929 13:05:58.208642 1359411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I0929 13:05:58.208665 1359411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-321209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-321209/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-321209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:05:58.356063 1359411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:05:58.356098 1359411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:05:58.356164 1359411 ubuntu.go:190] setting up certificates
	I0929 13:05:58.356180 1359411 provision.go:84] configureAuth start
	I0929 13:05:58.356349 1359411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-321209
	I0929 13:05:58.380176 1359411 provision.go:143] copyHostCerts
	I0929 13:05:58.380243 1359411 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:05:58.380258 1359411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:05:58.380335 1359411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:05:58.380544 1359411 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:05:58.380564 1359411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:05:58.380616 1359411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:05:58.380701 1359411 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:05:58.380708 1359411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:05:58.380733 1359411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:05:58.380794 1359411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.calico-321209 san=[127.0.0.1 192.168.76.2 calico-321209 localhost minikube]
	I0929 13:05:58.471846 1359411 provision.go:177] copyRemoteCerts
	I0929 13:05:58.471909 1359411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:05:58.471955 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:58.493387 1359411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa Username:docker}
	I0929 13:05:58.593159 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:05:58.632722 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 13:05:58.672534 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:05:58.703120 1359411 provision.go:87] duration metric: took 346.921467ms to configureAuth
	I0929 13:05:58.708039 1359411 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:05:58.708263 1359411 config.go:182] Loaded profile config "calico-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:05:58.708281 1359411 machine.go:96] duration metric: took 866.533117ms to provisionDockerMachine
	I0929 13:05:58.708292 1359411 client.go:171] duration metric: took 5.482060197s to LocalClient.Create
	I0929 13:05:58.708313 1359411 start.go:167] duration metric: took 5.482143932s to libmachine.API.Create "calico-321209"
	I0929 13:05:58.708322 1359411 start.go:293] postStartSetup for "calico-321209" (driver="docker")
	I0929 13:05:58.708333 1359411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:05:58.708392 1359411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:05:58.708437 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:58.733719 1359411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa Username:docker}
	I0929 13:05:58.837489 1359411 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:05:58.841298 1359411 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:05:58.841333 1359411 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:05:58.841346 1359411 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:05:58.841358 1359411 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:05:58.841371 1359411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:05:58.841428 1359411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:05:58.841538 1359411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:05:58.841668 1359411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:05:58.851083 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:05:58.883584 1359411 start.go:296] duration metric: took 175.242129ms for postStartSetup
	I0929 13:05:58.884035 1359411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-321209
	I0929 13:05:58.905900 1359411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/config.json ...
	I0929 13:05:58.906342 1359411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:05:58.906405 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:58.926702 1359411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa Username:docker}
	I0929 13:05:59.024007 1359411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:05:59.029319 1359411 start.go:128] duration metric: took 5.804940533s to createHost
	I0929 13:05:59.029347 1359411 start.go:83] releasing machines lock for "calico-321209", held for 5.805102386s
	I0929 13:05:59.029429 1359411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-321209
	I0929 13:05:59.050936 1359411 ssh_runner.go:195] Run: cat /version.json
	I0929 13:05:59.051016 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:59.051028 1359411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:05:59.051108 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:05:59.071677 1359411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa Username:docker}
	I0929 13:05:59.071847 1359411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa Username:docker}
	I0929 13:05:59.247390 1359411 ssh_runner.go:195] Run: systemctl --version
	I0929 13:05:59.253099 1359411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:05:59.258708 1359411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:05:59.293156 1359411 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:05:59.293240 1359411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:05:59.322454 1359411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:05:59.322481 1359411 start.go:495] detecting cgroup driver to use...
	I0929 13:05:59.322513 1359411 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:05:59.322568 1359411 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:05:59.335744 1359411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:05:59.348777 1359411 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:05:59.348838 1359411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:05:59.362973 1359411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:05:59.378718 1359411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:05:59.444921 1359411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:05:59.523928 1359411 docker.go:234] disabling docker service ...
	I0929 13:05:59.524019 1359411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:05:59.545853 1359411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:05:59.560027 1359411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:05:59.635791 1359411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:05:59.709011 1359411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:05:59.721920 1359411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:05:59.739507 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:05:59.752000 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:05:59.762526 1359411 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:05:59.762579 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:05:59.774382 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:05:59.785256 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:05:59.795576 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:05:59.807601 1359411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:05:59.817652 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:05:59.828301 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:05:59.839070 1359411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:05:59.849660 1359411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:05:59.858844 1359411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:05:59.868103 1359411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:05:59.940567 1359411 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:06:00.065604 1359411 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:06:00.065682 1359411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:06:00.070060 1359411 start.go:563] Will wait 60s for crictl version
	I0929 13:06:00.070113 1359411 ssh_runner.go:195] Run: which crictl
	I0929 13:06:00.073929 1359411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:06:00.113226 1359411 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 13:06:00.113285 1359411 ssh_runner.go:195] Run: containerd --version
	I0929 13:06:00.139276 1359411 ssh_runner.go:195] Run: containerd --version
	I0929 13:06:00.167505 1359411 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:06:00.168648 1359411 cli_runner.go:164] Run: docker network inspect calico-321209 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:06:00.185095 1359411 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 13:06:00.189189 1359411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:06:00.201105 1359411 kubeadm.go:875] updating cluster {Name:calico-321209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-321209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:06:00.201212 1359411 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:06:00.201257 1359411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:06:00.236472 1359411 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:06:00.236498 1359411 containerd.go:534] Images already preloaded, skipping extraction
	I0929 13:06:00.236561 1359411 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:06:00.271744 1359411 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:06:00.271768 1359411 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:06:00.271778 1359411 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
	I0929 13:06:00.271894 1359411 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-321209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-321209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0929 13:06:00.271973 1359411 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:06:00.308634 1359411 cni.go:84] Creating CNI manager for "calico"
	I0929 13:06:00.308656 1359411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:06:00.308678 1359411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-321209 NodeName:calico-321209 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:06:00.308798 1359411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-321209"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:06:00.308863 1359411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:06:00.319084 1359411 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:06:00.319154 1359411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:06:00.328389 1359411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0929 13:06:00.347987 1359411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:06:00.371023 1359411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0929 13:06:00.390042 1359411 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:06:00.393880 1359411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:06:00.405533 1359411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:06:00.472633 1359411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:06:00.506297 1359411 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209 for IP: 192.168.76.2
	I0929 13:06:00.506317 1359411 certs.go:194] generating shared ca certs ...
	I0929 13:06:00.506333 1359411 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:00.506494 1359411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:06:00.506548 1359411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:06:00.506564 1359411 certs.go:256] generating profile certs ...
	I0929 13:06:00.506631 1359411 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/client.key
	I0929 13:06:00.506651 1359411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/client.crt with IP's: []
	I0929 13:06:00.889591 1359411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/client.crt ...
	I0929 13:06:00.889625 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/client.crt: {Name:mkf2e79e1606a7dc1bfdbf330023f98cfbe76a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:00.889799 1359411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/client.key ...
	I0929 13:06:00.889811 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/client.key: {Name:mkbb33feddcbbd9a288644e6282bba0a3c26e4df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:00.889929 1359411 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.key.bf02c4bd
	I0929 13:06:00.889959 1359411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.crt.bf02c4bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0929 13:06:01.444145 1359411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.crt.bf02c4bd ...
	I0929 13:06:01.444178 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.crt.bf02c4bd: {Name:mk4675499a01cb3f51d4447a91165fb9259d5add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:01.444338 1359411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.key.bf02c4bd ...
	I0929 13:06:01.444351 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.key.bf02c4bd: {Name:mk0de81cde113abe4a52c0cd894a09a91574f39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:01.444427 1359411 certs.go:381] copying /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.crt.bf02c4bd -> /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.crt
	I0929 13:06:01.444521 1359411 certs.go:385] copying /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.key.bf02c4bd -> /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.key
	I0929 13:06:01.444587 1359411 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.key
	I0929 13:06:01.444603 1359411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.crt with IP's: []
	I0929 13:06:02.059277 1359411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.crt ...
	I0929 13:06:02.059315 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.crt: {Name:mk1940775c483faba7a7e452fa78ea59259f1b8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:02.059540 1359411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.key ...
	I0929 13:06:02.059563 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.key: {Name:mk870674757246c544ca340081b37447757c1e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:02.059819 1359411 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:06:02.059882 1359411 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:06:02.059898 1359411 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:06:02.059929 1359411 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:06:02.059978 1359411 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:06:02.060013 1359411 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:06:02.060076 1359411 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:06:02.060927 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:06:02.089756 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:06:02.117209 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:06:02.145331 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:06:02.173567 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 13:06:02.200291 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 13:06:02.227276 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:06:02.253595 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/calico-321209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:06:02.280173 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:06:02.394370 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:06:02.419386 1359411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:06:02.444531 1359411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:06:02.462599 1359411 ssh_runner.go:195] Run: openssl version
	I0929 13:06:02.468277 1359411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:06:02.478915 1359411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:06:02.482701 1359411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:06:02.482762 1359411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:06:02.489969 1359411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:06:02.500225 1359411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:06:02.511518 1359411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:06:02.515808 1359411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:06:02.515869 1359411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:06:02.523004 1359411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 13:06:02.533212 1359411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:06:02.543930 1359411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:06:02.548661 1359411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:06:02.548714 1359411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:06:02.557618 1359411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:06:02.568224 1359411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:06:02.572018 1359411 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:06:02.572077 1359411 kubeadm.go:392] StartCluster: {Name:calico-321209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-321209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:06:02.572150 1359411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:06:02.572222 1359411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:06:02.611117 1359411 cri.go:89] found id: ""
	I0929 13:06:02.611185 1359411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:06:02.621790 1359411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 13:06:02.633270 1359411 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 13:06:02.633340 1359411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 13:06:02.644768 1359411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 13:06:02.644792 1359411 kubeadm.go:157] found existing configuration files:
	
	I0929 13:06:02.644847 1359411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 13:06:02.654826 1359411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 13:06:02.654895 1359411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 13:06:02.664246 1359411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 13:06:02.673792 1359411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 13:06:02.673859 1359411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 13:06:02.683641 1359411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 13:06:02.694189 1359411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 13:06:02.694252 1359411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 13:06:02.704312 1359411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 13:06:02.714833 1359411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 13:06:02.714897 1359411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 13:06:02.725292 1359411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 13:06:02.769057 1359411 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 13:06:02.769134 1359411 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 13:06:02.877454 1359411 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 13:06:02.877519 1359411 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 13:06:02.877549 1359411 kubeadm.go:310] OS: Linux
	I0929 13:06:02.877589 1359411 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 13:06:02.877630 1359411 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 13:06:02.877720 1359411 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 13:06:02.877812 1359411 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 13:06:02.877908 1359411 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 13:06:02.877954 1359411 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 13:06:02.878036 1359411 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 13:06:02.878103 1359411 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 13:06:02.943503 1359411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 13:06:02.943645 1359411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 13:06:02.943843 1359411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 13:06:02.949741 1359411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 13:06:02.951802 1359411 out.go:252]   - Generating certificates and keys ...
	I0929 13:06:02.951916 1359411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 13:06:02.952067 1359411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 13:06:03.141999 1359411 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 13:06:03.198635 1359411 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 13:06:03.620031 1359411 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 13:06:03.921621 1359411 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 13:06:04.035315 1359411 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 13:06:04.035498 1359411 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-321209 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 13:06:04.201095 1359411 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 13:06:04.201235 1359411 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-321209 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 13:06:04.366531 1359411 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 13:06:04.641402 1359411 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 13:06:04.967672 1359411 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 13:06:04.967781 1359411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 13:06:05.078644 1359411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 13:06:05.214628 1359411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 13:06:05.791798 1359411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 13:06:06.907196 1359411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 13:06:07.287484 1359411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 13:06:07.287956 1359411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 13:06:07.293668 1359411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 13:06:07.295005 1359411 out.go:252]   - Booting up control plane ...
	I0929 13:06:07.295136 1359411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 13:06:07.295258 1359411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 13:06:07.295573 1359411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 13:06:07.307114 1359411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 13:06:07.307232 1359411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 13:06:07.313743 1359411 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 13:06:07.314005 1359411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 13:06:07.314070 1359411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 13:06:07.420774 1359411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 13:06:07.420953 1359411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 13:06:07.922239 1359411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.610817ms
	I0929 13:06:07.926569 1359411 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 13:06:07.926691 1359411 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0929 13:06:07.926826 1359411 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 13:06:07.926944 1359411 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 13:06:09.851737 1359411 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.924993152s
	I0929 13:06:10.376949 1359411 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.450337958s
	I0929 13:06:11.928311 1359411 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001642166s
	I0929 13:06:11.939429 1359411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 13:06:11.947685 1359411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 13:06:11.954802 1359411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 13:06:11.955107 1359411 kubeadm.go:310] [mark-control-plane] Marking the node calico-321209 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 13:06:11.961811 1359411 kubeadm.go:310] [bootstrap-token] Using token: jcgcys.rvwsgnmsgi4vcdh2
	I0929 13:06:11.963864 1359411 out.go:252]   - Configuring RBAC rules ...
	I0929 13:06:11.964032 1359411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 13:06:11.966763 1359411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 13:06:11.971419 1359411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 13:06:11.973739 1359411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 13:06:11.975804 1359411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 13:06:11.978008 1359411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 13:06:12.334429 1359411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 13:06:12.752703 1359411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 13:06:13.334231 1359411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 13:06:13.335395 1359411 kubeadm.go:310] 
	I0929 13:06:13.335489 1359411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 13:06:13.335502 1359411 kubeadm.go:310] 
	I0929 13:06:13.335611 1359411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 13:06:13.335620 1359411 kubeadm.go:310] 
	I0929 13:06:13.335655 1359411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 13:06:13.335750 1359411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 13:06:13.335823 1359411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 13:06:13.335833 1359411 kubeadm.go:310] 
	I0929 13:06:13.335907 1359411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 13:06:13.335916 1359411 kubeadm.go:310] 
	I0929 13:06:13.336021 1359411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 13:06:13.336041 1359411 kubeadm.go:310] 
	I0929 13:06:13.336109 1359411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 13:06:13.336221 1359411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 13:06:13.336329 1359411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 13:06:13.336341 1359411 kubeadm.go:310] 
	I0929 13:06:13.336466 1359411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 13:06:13.336578 1359411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 13:06:13.336590 1359411 kubeadm.go:310] 
	I0929 13:06:13.336711 1359411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jcgcys.rvwsgnmsgi4vcdh2 \
	I0929 13:06:13.336873 1359411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e31917eb19b7c0879803010df843d835ccee1dda0a35b4d1611c13a53effe46e \
	I0929 13:06:13.336905 1359411 kubeadm.go:310] 	--control-plane 
	I0929 13:06:13.336912 1359411 kubeadm.go:310] 
	I0929 13:06:13.337065 1359411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 13:06:13.337077 1359411 kubeadm.go:310] 
	I0929 13:06:13.337197 1359411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jcgcys.rvwsgnmsgi4vcdh2 \
	I0929 13:06:13.337280 1359411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e31917eb19b7c0879803010df843d835ccee1dda0a35b4d1611c13a53effe46e 
	I0929 13:06:13.340471 1359411 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 13:06:13.340632 1359411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
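The kubeadm output above ends with the standard post-init instructions. As an illustrative check only (not something this test runs), the state kubeadm reports could be confirmed from inside the node with the admin kubeconfig it just wrote:

	# sketch: verify the control plane kubeadm reports as initialized
	export KUBECONFIG=/etc/kubernetes/admin.conf
	kubectl get nodes
	kubectl -n kube-system get pods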
	I0929 13:06:13.340660 1359411 cni.go:84] Creating CNI manager for "calico"
	I0929 13:06:13.342939 1359411 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0929 13:06:13.345305 1359411 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 13:06:13.345345 1359411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0929 13:06:13.367129 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
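minikube applies the Calico manifest it copied to /var/tmp/minikube/cni.yaml with the bundled kubectl shown above. Assuming the standard Calico manifest (consistent with the calico-node and calico-kube-controllers pod names later in this log), the rollout could be watched by hand; sketch only:

	# sketch: watch the Calico components from the applied manifest come up
	kubectl -n kube-system rollout status daemonset/calico-node
	kubectl -n kube-system rollout status deployment/calico-kube-controllers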
	I0929 13:06:14.205733 1359411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 13:06:14.205898 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:14.205943 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-321209 minikube.k8s.io/updated_at=2025_09_29T13_06_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=calico-321209 minikube.k8s.io/primary=true
	I0929 13:06:14.287920 1359411 ops.go:34] apiserver oom_adj: -16
	I0929 13:06:14.287994 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:14.788204 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:15.288126 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:15.788100 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:16.288811 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:16.789059 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:17.288212 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:17.788860 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:18.288498 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:18.788855 1359411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:06:18.857538 1359411 kubeadm.go:1105] duration metric: took 4.651693209s to wait for elevateKubeSystemPrivileges
	I0929 13:06:18.857571 1359411 kubeadm.go:394] duration metric: took 16.285500243s to StartCluster
	I0929 13:06:18.857590 1359411 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:18.857653 1359411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:06:18.858895 1359411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:06:18.859183 1359411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 13:06:18.859186 1359411 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:06:18.859288 1359411 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:06:18.859393 1359411 addons.go:69] Setting storage-provisioner=true in profile "calico-321209"
	I0929 13:06:18.859410 1359411 addons.go:69] Setting default-storageclass=true in profile "calico-321209"
	I0929 13:06:18.859424 1359411 addons.go:238] Setting addon storage-provisioner=true in "calico-321209"
	I0929 13:06:18.859424 1359411 config.go:182] Loaded profile config "calico-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:06:18.859439 1359411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-321209"
	I0929 13:06:18.859461 1359411 host.go:66] Checking if "calico-321209" exists ...
	I0929 13:06:18.859837 1359411 cli_runner.go:164] Run: docker container inspect calico-321209 --format={{.State.Status}}
	I0929 13:06:18.859914 1359411 cli_runner.go:164] Run: docker container inspect calico-321209 --format={{.State.Status}}
	I0929 13:06:18.860758 1359411 out.go:179] * Verifying Kubernetes components...
	I0929 13:06:18.861890 1359411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:06:18.885628 1359411 addons.go:238] Setting addon default-storageclass=true in "calico-321209"
	I0929 13:06:18.885681 1359411 host.go:66] Checking if "calico-321209" exists ...
	I0929 13:06:18.886171 1359411 cli_runner.go:164] Run: docker container inspect calico-321209 --format={{.State.Status}}
	I0929 13:06:18.888078 1359411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:06:18.890510 1359411 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:06:18.890532 1359411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:06:18.890595 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:06:18.920184 1359411 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:06:18.920209 1359411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:06:18.920532 1359411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-321209
	I0929 13:06:18.921675 1359411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa Username:docker}
	I0929 13:06:18.948683 1359411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/calico-321209/id_rsa Username:docker}
	I0929 13:06:18.959271 1359411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 13:06:19.051084 1359411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:06:19.063510 1359411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:06:19.078288 1359411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:06:19.193695 1359411 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
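The sed pipeline a few lines above rewrites the coredns ConfigMap so its Corefile gains a hosts block for host.minikube.internal. Reconstructed from that sed expression, the injected stanza looks roughly like the following, and can be inspected with kubectl -n kube-system get configmap coredns -o yaml:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}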
	I0929 13:06:19.195479 1359411 node_ready.go:35] waiting up to 15m0s for node "calico-321209" to be "Ready" ...
	I0929 13:06:19.206931 1359411 node_ready.go:49] node "calico-321209" is "Ready"
	I0929 13:06:19.206979 1359411 node_ready.go:38] duration metric: took 11.471007ms for node "calico-321209" to be "Ready" ...
	I0929 13:06:19.206999 1359411 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:06:19.207198 1359411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:06:19.418321 1359411 api_server.go:72] duration metric: took 559.090396ms to wait for apiserver process to appear ...
	I0929 13:06:19.418349 1359411 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:06:19.418374 1359411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 13:06:19.424481 1359411 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 13:06:19.424947 1359411 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 13:06:19.425329 1359411 api_server.go:141] control plane version: v1.34.0
	I0929 13:06:19.425371 1359411 api_server.go:131] duration metric: took 7.013767ms to wait for apiserver health ...
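minikube's health wait above queries https://192.168.76.2:8443/healthz directly and gets a 200. An equivalent manual probe through the cluster credentials (illustrative, not part of the test) would be:

	kubectl get --raw='/healthz'
	kubectl get --raw='/readyz?verbose'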
	I0929 13:06:19.425383 1359411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:06:19.426593 1359411 addons.go:514] duration metric: took 567.31712ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 13:06:19.428773 1359411 system_pods.go:59] 10 kube-system pods found
	I0929 13:06:19.428801 1359411 system_pods.go:61] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:19.428810 1359411 system_pods.go:61] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:19.428828 1359411 system_pods.go:61] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:19.428834 1359411 system_pods.go:61] "coredns-66bc5c9577-qb9zz" [60a5199a-df11-43a8-bf35-82ab24a1d2a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:19.428839 1359411 system_pods.go:61] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:19.428845 1359411 system_pods.go:61] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:19.428850 1359411 system_pods.go:61] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:19.428857 1359411 system_pods.go:61] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 13:06:19.428867 1359411 system_pods.go:61] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:19.428876 1359411 system_pods.go:61] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:06:19.428882 1359411 system_pods.go:74] duration metric: took 3.48781ms to wait for pod list to return data ...
	I0929 13:06:19.428891 1359411 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:06:19.430948 1359411 default_sa.go:45] found service account: "default"
	I0929 13:06:19.430979 1359411 default_sa.go:55] duration metric: took 2.080901ms for default service account to be created ...
	I0929 13:06:19.430989 1359411 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:06:19.433799 1359411 system_pods.go:86] 10 kube-system pods found
	I0929 13:06:19.433832 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:19.433845 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:19.433862 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:19.433875 1359411 system_pods.go:89] "coredns-66bc5c9577-qb9zz" [60a5199a-df11-43a8-bf35-82ab24a1d2a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:19.433883 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:19.433895 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:19.433909 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:19.433917 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 13:06:19.433925 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:19.433937 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:06:19.434018 1359411 retry.go:31] will retry after 267.160802ms: missing components: kube-dns, kube-proxy
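The retry loop here polls the kube-system pod list until CoreDNS reports Ready. Assuming the standard k8s-app=kube-dns label on the CoreDNS pods, a one-shot equivalent would be the wait below; sketch only:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=5m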
	I0929 13:06:19.699400 1359411 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-321209" context rescaled to 1 replicas
	I0929 13:06:19.705018 1359411 system_pods.go:86] 10 kube-system pods found
	I0929 13:06:19.705051 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:19.705066 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:19.705081 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:19.705091 1359411 system_pods.go:89] "coredns-66bc5c9577-qb9zz" [60a5199a-df11-43a8-bf35-82ab24a1d2a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:19.705110 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:19.705124 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:19.705137 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:19.705146 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:19.705155 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:19.705168 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:06:19.705189 1359411 retry.go:31] will retry after 307.187717ms: missing components: kube-dns
	I0929 13:06:20.017892 1359411 system_pods.go:86] 10 kube-system pods found
	I0929 13:06:20.017935 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:20.017948 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:20.017973 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:20.017982 1359411 system_pods.go:89] "coredns-66bc5c9577-qb9zz" [60a5199a-df11-43a8-bf35-82ab24a1d2a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:20.017990 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:20.017999 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:20.018007 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:20.018013 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:20.018021 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:20.018028 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:06:20.018048 1359411 retry.go:31] will retry after 421.546363ms: missing components: kube-dns
	I0929 13:06:20.444764 1359411 system_pods.go:86] 10 kube-system pods found
	I0929 13:06:20.444802 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:20.444816 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:20.444827 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:20.444836 1359411 system_pods.go:89] "coredns-66bc5c9577-qb9zz" [60a5199a-df11-43a8-bf35-82ab24a1d2a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:20.444851 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:20.444866 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:20.444880 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:20.444885 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:20.444893 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:20.444902 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:06:20.444919 1359411 retry.go:31] will retry after 452.236531ms: missing components: kube-dns
	I0929 13:06:20.902179 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:20.902223 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:20.902238 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:20.902253 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:20.902263 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:20.902273 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:20.902286 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:20.902293 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:20.902305 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:20.902311 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:20.902332 1359411 retry.go:31] will retry after 694.748776ms: missing components: kube-dns
	I0929 13:06:21.602602 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:21.602643 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:21.602655 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:21.602669 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:21.602681 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:21.602690 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:21.602713 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:21.602722 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:21.602734 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:21.602742 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:21.602763 1359411 retry.go:31] will retry after 839.728497ms: missing components: kube-dns
	I0929 13:06:22.447405 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:22.447443 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:22.447457 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:22.447471 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:22.447477 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:22.447483 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:06:22.447494 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:22.447500 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:22.447506 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:22.447509 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:22.447524 1359411 retry.go:31] will retry after 803.946667ms: missing components: kube-dns
	I0929 13:06:23.257030 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:23.257070 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:23.257083 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:23.257092 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:23.257101 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:23.257110 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:23.257121 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:23.257126 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:23.257133 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:23.257138 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:23.257158 1359411 retry.go:31] will retry after 1.381235568s: missing components: kube-dns
	I0929 13:06:24.654547 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:24.654597 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:24.654612 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:24.654627 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:24.654637 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:24.654646 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:24.654655 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:24.654673 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:24.654682 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:06:24.654687 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:24.654707 1359411 retry.go:31] will retry after 1.309571081s: missing components: kube-dns
	I0929 13:06:25.968064 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:25.968096 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:25.968107 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:25.968114 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:25.968119 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:06:25.968124 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:25.968129 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:06:25.968133 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:25.968137 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:25.968140 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:25.968155 1359411 retry.go:31] will retry after 1.876819335s: missing components: kube-dns
	I0929 13:06:27.849493 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:27.849534 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:27.849546 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:27.849556 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:27.849565 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:06:27.849576 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:27.849583 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:06:27.849588 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:27.849593 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:27.849598 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:27.849622 1359411 retry.go:31] will retry after 2.087125068s: missing components: kube-dns
	I0929 13:06:29.940431 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:29.940463 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:29.940472 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:29.940479 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:29.940483 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:06:29.940488 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:29.940491 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:06:29.940497 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:29.940500 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:29.940503 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:29.940518 1359411 retry.go:31] will retry after 3.39928565s: missing components: kube-dns
	I0929 13:06:33.344685 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:33.344723 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:33.344736 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:33.344744 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:33.344748 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:06:33.344756 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:33.344760 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:06:33.344764 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:33.344767 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:33.344771 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:33.344787 1359411 retry.go:31] will retry after 3.532043951s: missing components: kube-dns
	I0929 13:06:36.883887 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:36.883926 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:36.883936 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:36.883943 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:36.883948 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:06:36.883952 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:36.883956 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:06:36.883974 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:36.883981 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:36.883986 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:36.884003 1359411 retry.go:31] will retry after 5.162118728s: missing components: kube-dns
	I0929 13:06:42.052848 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:42.052882 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:42.052893 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:42.052900 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:42.052906 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:06:42.052910 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:42.052913 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:06:42.052919 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:42.052922 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:42.052925 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:42.052940 1359411 retry.go:31] will retry after 6.676879077s: missing components: kube-dns
	I0929 13:06:48.735073 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:48.735109 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:48.735118 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:48.735135 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:48.735139 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:06:48.735144 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:48.735148 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:06:48.735157 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:48.735160 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:48.735164 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:48.735181 1359411 retry.go:31] will retry after 6.885306686s: missing components: kube-dns
	I0929 13:06:55.626729 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:06:55.626771 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:06:55.626784 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:06:55.626799 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:06:55.626808 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:06:55.626828 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:06:55.626837 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:06:55.626846 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:06:55.626852 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:06:55.626860 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:06:55.626879 1359411 retry.go:31] will retry after 9.421054584s: missing components: kube-dns
	I0929 13:07:05.054618 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:07:05.054657 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:07:05.054669 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:07:05.054679 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:07:05.054687 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:07:05.054700 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:07:05.054706 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:07:05.054714 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:07:05.054719 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:07:05.054723 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:07:05.054743 1359411 retry.go:31] will retry after 11.637512215s: missing components: kube-dns
	I0929 13:07:16.699517 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:07:16.699553 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:07:16.699562 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:07:16.699569 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:07:16.699573 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:07:16.699578 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:07:16.699581 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:07:16.699585 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:07:16.699588 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:07:16.699592 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:07:16.699608 1359411 retry.go:31] will retry after 16.693122003s: missing components: kube-dns
	I0929 13:07:33.397132 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:07:33.397168 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:07:33.397183 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:07:33.397192 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:07:33.397198 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:07:33.397210 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:07:33.397218 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:07:33.397224 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:07:33.397231 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:07:33.397236 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:07:33.397256 1359411 retry.go:31] will retry after 13.232597405s: missing components: kube-dns
	I0929 13:07:46.637133 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:07:46.637171 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:07:46.637182 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:07:46.637192 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:07:46.637197 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:07:46.637203 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:07:46.637209 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:07:46.637214 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:07:46.637217 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:07:46.637220 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:07:46.637237 1359411 retry.go:31] will retry after 22.289841378s: missing components: kube-dns
	I0929 13:08:08.931654 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:08:08.931694 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:08:08.931703 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:08:08.931710 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:08:08.931714 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:08:08.931725 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:08:08.931729 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:08:08.931734 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:08:08.931739 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:08:08.931744 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:08:08.931765 1359411 retry.go:31] will retry after 21.845858985s: missing components: kube-dns
	I0929 13:08:30.783286 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:08:30.783328 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:08:30.783345 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:08:30.783359 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:08:30.783370 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:08:30.783381 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:08:30.783390 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:08:30.783399 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:08:30.783406 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:08:30.783412 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:08:30.783434 1359411 retry.go:31] will retry after 35.004080932s: missing components: kube-dns
	I0929 13:09:05.793123 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:09:05.793161 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:09:05.793173 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:09:05.793191 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:09:05.793201 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:09:05.793208 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:09:05.793213 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:09:05.793223 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:09:05.793228 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:09:05.793233 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:09:05.793248 1359411 retry.go:31] will retry after 51.308654599s: missing components: kube-dns
	I0929 13:09:57.109518 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:09:57.109556 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:09:57.109565 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:09:57.109572 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:09:57.109577 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:09:57.109582 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:09:57.109585 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:09:57.109591 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:09:57.109620 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:09:57.109627 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:09:57.109644 1359411 retry.go:31] will retry after 52.987897106s: missing components: kube-dns
	I0929 13:10:50.103007 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:10:50.103057 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:10:50.103075 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:10:50.103091 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:50.103100 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:10:50.103107 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:10:50.103115 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:10:50.103122 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:10:50.103130 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:10:50.103135 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:10:50.103156 1359411 retry.go:31] will retry after 45.832047842s: missing components: kube-dns
	I0929 13:11:35.940622 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:11:35.940666 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:11:35.940679 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:11:35.940691 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:11:35.940697 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:11:35.940703 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:11:35.940709 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:11:35.940717 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:11:35.940726 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:11:35.940732 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:11:35.940756 1359411 retry.go:31] will retry after 45.593833894s: missing components: kube-dns
	I0929 13:12:21.540022 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:12:21.540068 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:12:21.540078 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:12:21.540088 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:12:21.540096 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:12:21.540102 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:12:21.540108 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:12:21.540112 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:12:21.540117 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:12:21.540120 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:12:21.540139 1359411 retry.go:31] will retry after 1m5.22199495s: missing components: kube-dns
	I0929 13:13:26.769357 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:13:26.769402 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:13:26.769415 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:13:26.769424 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:13:26.769428 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:13:26.769432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:13:26.769438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:13:26.769442 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:13:26.769446 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:13:26.769449 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:13:26.769467 1359411 retry.go:31] will retry after 1m13.959390534s: missing components: kube-dns
	I0929 13:14:40.733869 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:40.733915 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:14:40.733926 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:14:40.733934 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:40.733937 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:14:40.733942 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:14:40.733946 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:14:40.733951 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:14:40.733954 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:14:40.733958 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:14:40.734002 1359411 retry.go:31] will retry after 1m13.688928173s: missing components: kube-dns
	I0929 13:15:54.426567 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:15:54.426609 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:15:54.426619 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:15:54.426627 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:15:54.426631 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:15:54.426635 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:15:54.426639 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:15:54.426644 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:15:54.426647 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:15:54.426650 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:15:54.426671 1359411 retry.go:31] will retry after 53.851303252s: missing components: kube-dns
	I0929 13:16:48.282415 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:16:48.282459 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:16:48.282471 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:16:48.282478 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:16:48.282481 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:16:48.282486 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:16:48.282489 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:16:48.282493 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:16:48.282496 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:16:48.282499 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:16:48.282516 1359411 retry.go:31] will retry after 1m1.774490631s: missing components: kube-dns
	I0929 13:17:50.062266 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:17:50.062389 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:17:50.062405 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:17:50.062415 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:17:50.062422 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:17:50.062432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:17:50.062438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:17:50.062447 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:17:50.062453 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:17:50.062460 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:17:50.062481 1359411 retry.go:31] will retry after 1m8.677032828s: missing components: kube-dns
	I0929 13:18:58.743760 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:18:58.743806 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:18:58.743817 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:18:58.743825 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:18:58.743832 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:18:58.743840 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:18:58.743846 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:18:58.743853 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:18:58.743858 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:18:58.743862 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:18:58.743885 1359411 retry.go:31] will retry after 1m8.264714311s: missing components: kube-dns
	I0929 13:20:07.016096 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:20:07.016139 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:20:07.016154 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:20:07.016163 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:20:07.016166 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:20:07.016171 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:20:07.016174 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:20:07.016180 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:20:07.016183 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:20:07.016186 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:20:07.016207 1359411 retry.go:31] will retry after 1m6.454135724s: missing components: kube-dns
	I0929 13:21:13.475048 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:21:13.475102 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:21:13.475120 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:21:13.475131 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:21:13.475138 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:21:13.475145 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:21:13.475151 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:21:13.475159 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:21:13.475165 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:21:13.475173 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:21:13.476915 1359411 out.go:203] 
	W0929 13:21:13.478119 1359411 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0929 13:21:13.478137 1359411 out.go:285] * 
	* 
	W0929 13:21:13.479778 1359411 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0929 13:21:13.480702 1359411 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (920.55s)
E0929 13:28:40.996017 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
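Note on the failure above: the calico profile times out after the full 15m wait because calico-node-dvd95 never completes its init containers (upgrade-ipam, install-cni, mount-bpffs), so the CNI never comes up and coredns-66bc5c9577-ntxf6 stays Pending, which is why every retry reports kube-dns as the missing component. A minimal diagnostic sketch, assuming the calico-321209 profile is still running and that minikube has set the kubectl context to the profile name (both assumptions, not part of the recorded run):

	# Inspect why the init containers are stuck; pod events usually show image pull or mount errors
	kubectl --context calico-321209 -n kube-system describe pod calico-node-dvd95
	# Try to read the log of the first incomplete init container listed in the status above
	kubectl --context calico-321209 -n kube-system logs calico-node-dvd95 -c upgrade-ipam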

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r4kbj" [60e7b4a4-451c-4c51-ae68-8a626ab1e1a7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-495121 -n old-k8s-version-495121
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:19:07.194775557 +0000 UTC m=+3711.778798748
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-495121 describe po kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-495121 describe po kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-r4kbj
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-495121/192.168.85.2
Start Time:       Mon, 29 Sep 2025 13:09:45 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bc8ds (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-bc8ds:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m21s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj to old-k8s-version-495121
Normal   Pulling    7m38s (x4 over 9m22s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     7m35s (x4 over 9m16s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m35s (x4 over 9m16s)   kubelet            Error: ErrImagePull
Warning  Failed     7m24s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    4m13s (x19 over 9m15s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-495121 logs kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-495121 logs kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard: exit status 1 (76.597972ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-r4kbj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-495121 logs kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
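Note on the failure above: the dashboard pod never starts because every pull of docker.io/kubernetesui/dashboard is rejected with 429 Too Many Requests (the unauthenticated Docker Hub rate limit), so the kubelet backs off until the 9m0s wait expires. One possible mitigation sketch, outside this recorded run and with the caveat that the pod spec pins the image by digest, so a tag-loaded image may not satisfy the runtime's digest lookup: authenticate the pull on the host and pre-load the image into the profile's node.

	# Authenticate so pulls count against an account quota rather than the anonymous limit
	docker login
	# Pull the dashboard image on the host, then copy it into the minikube node's runtime
	docker pull kubernetesui/dashboard:v2.7.0
	minikube -p old-k8s-version-495121 image load kubernetesui/dashboard:v2.7.0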
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-495121
helpers_test.go:243: (dbg) docker inspect old-k8s-version-495121:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e",
	        "Created": "2025-09-29T13:08:13.162993854Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1421162,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:09:23.617275974Z",
	            "FinishedAt": "2025-09-29T13:09:22.71435807Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/hosts",
	        "LogPath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e-json.log",
	        "Name": "/old-k8s-version-495121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-495121:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-495121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e",
	                "LowerDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-495121",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-495121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-495121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-495121",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-495121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2446c71494f7c1548bf18be8378818cd7d1090004635cda03b07522479e6cd25",
	            "SandboxKey": "/var/run/docker/netns/2446c71494f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33601"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33602"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33605"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33603"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33604"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-495121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:fd:6d:f2:f2:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8c81c3a8f3b9196bcf906d745c96c3d5c02bfac2fc5ce07f5699ae04f8992ce",
	                    "EndpointID": "2131440c530d6685d8947b302d338a83dd2d78d163419d31ac86388c624a7c9a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-495121",
	                        "a1aa96301603"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-495121 -n old-k8s-version-495121
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-495121 logs -n 25
E0929 13:19:08.698525 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-495121 logs -n 25: (1.57408575s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p bridge-321209 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo containerd config dump                                                                                                                                                                                                        │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │                     │
	│ ssh     │ -p bridge-321209 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo crio config                                                                                                                                                                                                                   │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ delete  │ -p bridge-321209                                                                                                                                                                                                                                    │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ delete  │ -p disable-driver-mounts-849793                                                                                                                                                                                                                     │ disable-driver-mounts-849793 │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-495121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-495121 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-495121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p old-k8s-version-495121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p embed-certs-644246 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ stop    │ -p embed-certs-644246 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable dashboard -p embed-certs-644246 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p embed-certs-644246 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-554589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-554589 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-554589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:10:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:10:35.887390 1430964 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:10:35.887528 1430964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:10:35.887538 1430964 out.go:374] Setting ErrFile to fd 2...
	I0929 13:10:35.887543 1430964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:10:35.887766 1430964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:10:35.888286 1430964 out.go:368] Setting JSON to false
	I0929 13:10:35.889692 1430964 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21173,"bootTime":1759130263,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:10:35.889804 1430964 start.go:140] virtualization: kvm guest
	I0929 13:10:35.892010 1430964 out.go:179] * [no-preload-554589] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:10:35.893293 1430964 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:10:35.893300 1430964 notify.go:220] Checking for updates...
	I0929 13:10:35.895737 1430964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:10:35.896838 1430964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:35.897902 1430964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:10:35.898915 1430964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:10:35.899947 1430964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:10:35.901594 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:35.902157 1430964 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:10:35.926890 1430964 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:10:35.926997 1430964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:10:35.983850 1430964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:10:35.973663238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:10:35.983999 1430964 docker.go:318] overlay module found
	I0929 13:10:35.986231 1430964 out.go:179] * Using the docker driver based on existing profile
	I0929 13:10:35.987170 1430964 start.go:304] selected driver: docker
	I0929 13:10:35.987184 1430964 start.go:924] validating driver "docker" against &{Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:35.987271 1430964 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:10:35.987858 1430964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:10:36.048316 1430964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:10:36.037327075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:10:36.048601 1430964 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:10:36.048632 1430964 cni.go:84] Creating CNI manager for ""
	I0929 13:10:36.048678 1430964 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:10:36.048716 1430964 start.go:348] cluster config:
	{Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:36.050290 1430964 out.go:179] * Starting "no-preload-554589" primary control-plane node in "no-preload-554589" cluster
	I0929 13:10:36.051338 1430964 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:10:36.052310 1430964 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:10:36.053168 1430964 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:10:36.053271 1430964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:10:36.053310 1430964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/config.json ...
	I0929 13:10:36.053485 1430964 cache.go:107] acquiring lock: {Name:mk0a24f1bf5eff836d398ee592530f35f71c0ee4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053482 1430964 cache.go:107] acquiring lock: {Name:mk71aec952ee722ffcd940a39d5e958f64a61352 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053585 1430964 cache.go:107] acquiring lock: {Name:mk34c1dbc7ce4b55aef58920d74b57fccb4f6138 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053579 1430964 cache.go:107] acquiring lock: {Name:mke82396d3d70feba1e14470b5460d60995ab461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053623 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 13:10:36.053595 1430964 cache.go:107] acquiring lock: {Name:mkbf689face8cd4cbe1088f8d16d264b311f5a05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053636 1430964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 175.418µs
	I0929 13:10:36.053653 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 13:10:36.053655 1430964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 13:10:36.053662 1430964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 80.179µs
	I0929 13:10:36.053671 1430964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 13:10:36.053681 1430964 cache.go:107] acquiring lock: {Name:mk3476c105048b10b0947812a968956108eab0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053739 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 13:10:36.053734 1430964 cache.go:107] acquiring lock: {Name:mka7f06997e7f1d40489000070294d8bfac768af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053755 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 13:10:36.053752 1430964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 234.346µs
	I0929 13:10:36.053771 1430964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 13:10:36.053770 1430964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 233.678µs
	I0929 13:10:36.053804 1430964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 13:10:36.053720 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 13:10:36.053827 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 13:10:36.053833 1430964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 365.093µs
	I0929 13:10:36.053851 1430964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 13:10:36.053850 1430964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 238.143µs
	I0929 13:10:36.053859 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 13:10:36.053879 1430964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 191.128µs
	I0929 13:10:36.053891 1430964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 13:10:36.053864 1430964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 13:10:36.053591 1430964 cache.go:107] acquiring lock: {Name:mk385a135f933810a76b1272dffaf4891eef10f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.054019 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 13:10:36.054027 1430964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 443.615µs
	I0929 13:10:36.054035 1430964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 13:10:36.054043 1430964 cache.go:87] Successfully saved all images to host disk.
	I0929 13:10:36.075042 1430964 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:10:36.075061 1430964 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:10:36.075077 1430964 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:10:36.075108 1430964 start.go:360] acquireMachinesLock for no-preload-554589: {Name:mk5ff8f08413e283845bfb46ae253fb42cbb2a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.075172 1430964 start.go:364] duration metric: took 44.583µs to acquireMachinesLock for "no-preload-554589"
	I0929 13:10:36.075206 1430964 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:10:36.075218 1430964 fix.go:54] fixHost starting: 
	I0929 13:10:36.075468 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:36.094782 1430964 fix.go:112] recreateIfNeeded on no-preload-554589: state=Stopped err=<nil>
	W0929 13:10:36.094818 1430964 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:10:36.096594 1430964 out.go:252] * Restarting existing docker container for "no-preload-554589" ...
	I0929 13:10:36.096656 1430964 cli_runner.go:164] Run: docker start no-preload-554589
	I0929 13:10:36.348329 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:36.367780 1430964 kic.go:430] container "no-preload-554589" state is running.
	I0929 13:10:36.368218 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:36.387825 1430964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/config.json ...
	I0929 13:10:36.388091 1430964 machine.go:93] provisionDockerMachine start ...
	I0929 13:10:36.388191 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:36.407360 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:36.407692 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:36.407711 1430964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:10:36.408408 1430964 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40280->127.0.0.1:33611: read: connection reset by peer
	I0929 13:10:39.547089 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-554589
	
	I0929 13:10:39.547121 1430964 ubuntu.go:182] provisioning hostname "no-preload-554589"
	I0929 13:10:39.547190 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.564551 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:39.564843 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:39.564862 1430964 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-554589 && echo "no-preload-554589" | sudo tee /etc/hostname
	I0929 13:10:39.715451 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-554589
	
	I0929 13:10:39.715532 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.733400 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:39.733671 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:39.733690 1430964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-554589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-554589/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-554589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:10:39.872701 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:10:39.872728 1430964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:10:39.872749 1430964 ubuntu.go:190] setting up certificates
	I0929 13:10:39.872759 1430964 provision.go:84] configureAuth start
	I0929 13:10:39.872813 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:39.891390 1430964 provision.go:143] copyHostCerts
	I0929 13:10:39.891464 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:10:39.891484 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:10:39.891561 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:10:39.891693 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:10:39.891709 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:10:39.891752 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:10:39.891910 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:10:39.891923 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:10:39.891972 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:10:39.892068 1430964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.no-preload-554589 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-554589]
	I0929 13:10:39.939438 1430964 provision.go:177] copyRemoteCerts
	I0929 13:10:39.939504 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:10:39.939548 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.956799 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.055067 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:10:40.080134 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 13:10:40.104611 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:10:40.129350 1430964 provision.go:87] duration metric: took 256.573931ms to configureAuth
	I0929 13:10:40.129378 1430964 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:10:40.129599 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:40.129612 1430964 machine.go:96] duration metric: took 3.741506785s to provisionDockerMachine
	I0929 13:10:40.129622 1430964 start.go:293] postStartSetup for "no-preload-554589" (driver="docker")
	I0929 13:10:40.129637 1430964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:10:40.129690 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:10:40.129756 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.147536 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.246335 1430964 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:10:40.249785 1430964 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:10:40.249812 1430964 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:10:40.249819 1430964 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:10:40.249826 1430964 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:10:40.249835 1430964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:10:40.249880 1430964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:10:40.249948 1430964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:10:40.250070 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:10:40.259126 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:10:40.284860 1430964 start.go:296] duration metric: took 155.217314ms for postStartSetup
	I0929 13:10:40.284948 1430964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:10:40.285044 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.302550 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.396065 1430964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:10:40.400658 1430964 fix.go:56] duration metric: took 4.325395629s for fixHost
	I0929 13:10:40.400685 1430964 start.go:83] releasing machines lock for "no-preload-554589", held for 4.325500319s
	I0929 13:10:40.400745 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:40.419253 1430964 ssh_runner.go:195] Run: cat /version.json
	I0929 13:10:40.419302 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.419316 1430964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:10:40.419372 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.437334 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.437565 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.530040 1430964 ssh_runner.go:195] Run: systemctl --version
	I0929 13:10:40.618702 1430964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:10:40.623606 1430964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:10:40.643627 1430964 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:10:40.643704 1430964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:10:40.655028 1430964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:10:40.655056 1430964 start.go:495] detecting cgroup driver to use...
	I0929 13:10:40.655090 1430964 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:10:40.655143 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:10:40.669887 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:10:40.682685 1430964 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:10:40.682743 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:10:40.697781 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:10:40.710870 1430964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:10:40.781641 1430964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:10:40.850419 1430964 docker.go:234] disabling docker service ...
	I0929 13:10:40.850476 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:10:40.864573 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:10:40.877583 1430964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:10:40.947404 1430964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:10:41.013464 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:10:41.025589 1430964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:10:41.043594 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:10:41.054426 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:10:41.064879 1430964 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:10:41.064945 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:10:41.075614 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:10:41.085902 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:10:41.096231 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:10:41.106375 1430964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:10:41.116101 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:10:41.126585 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:10:41.136683 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:10:41.147471 1430964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:10:41.156376 1430964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:10:41.164882 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:41.232125 1430964 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:10:41.336741 1430964 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:10:41.336815 1430964 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:10:41.341097 1430964 start.go:563] Will wait 60s for crictl version
	I0929 13:10:41.341150 1430964 ssh_runner.go:195] Run: which crictl
	I0929 13:10:41.344984 1430964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:10:41.381858 1430964 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 13:10:41.381934 1430964 ssh_runner.go:195] Run: containerd --version
	I0929 13:10:41.407752 1430964 ssh_runner.go:195] Run: containerd --version
	I0929 13:10:41.435044 1430964 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:10:41.436030 1430964 cli_runner.go:164] Run: docker network inspect no-preload-554589 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:10:41.453074 1430964 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 13:10:41.457289 1430964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:10:41.469647 1430964 kubeadm.go:875] updating cluster {Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:10:41.469759 1430964 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:10:41.469801 1430964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:10:41.505893 1430964 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:10:41.505917 1430964 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:10:41.505925 1430964 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0929 13:10:41.506080 1430964 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-554589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:10:41.506140 1430964 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:10:41.542471 1430964 cni.go:84] Creating CNI manager for ""
	I0929 13:10:41.542493 1430964 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:10:41.542504 1430964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:10:41.542530 1430964 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-554589 NodeName:no-preload-554589 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:10:41.542668 1430964 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-554589"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:10:41.542745 1430964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:10:41.552925 1430964 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:10:41.553026 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:10:41.562817 1430964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0929 13:10:41.581742 1430964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:10:41.600851 1430964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0929 13:10:41.620107 1430964 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:10:41.623949 1430964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:10:41.636268 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:41.709798 1430964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:10:41.732612 1430964 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589 for IP: 192.168.94.2
	I0929 13:10:41.732634 1430964 certs.go:194] generating shared ca certs ...
	I0929 13:10:41.732655 1430964 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:41.732829 1430964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:10:41.732882 1430964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:10:41.732897 1430964 certs.go:256] generating profile certs ...
	I0929 13:10:41.733042 1430964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/client.key
	I0929 13:10:41.733119 1430964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.key.98402d2c
	I0929 13:10:41.733170 1430964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.key
	I0929 13:10:41.733316 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:10:41.733355 1430964 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:10:41.733367 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:10:41.733400 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:10:41.733427 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:10:41.733467 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:10:41.733519 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:10:41.734337 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:10:41.765009 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:10:41.793504 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:10:41.827789 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:10:41.857035 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:10:41.884766 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:10:41.911756 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:10:41.941605 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:10:41.967516 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:10:41.992710 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:10:42.018319 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:10:42.042856 1430964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:10:42.060923 1430964 ssh_runner.go:195] Run: openssl version
	I0929 13:10:42.066444 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:10:42.076065 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.079599 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.079650 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.086452 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:10:42.095408 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:10:42.105262 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.108926 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.108999 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.115656 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 13:10:42.124799 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:10:42.134401 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.137842 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.137890 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.145059 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:10:42.154717 1430964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:10:42.158651 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:10:42.165748 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:10:42.172341 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:10:42.178784 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:10:42.185439 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:10:42.192086 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 13:10:42.198506 1430964 kubeadm.go:392] StartCluster: {Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:42.198617 1430964 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:10:42.198653 1430964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:10:42.235388 1430964 cri.go:89] found id: "3aa4e89ae916232c207fa3b1b9f357dad149bbb0d0a5d1cd2b42c27cad6374b2"
	I0929 13:10:42.235408 1430964 cri.go:89] found id: "21b59ec52c2f189cce4c1c71122fb539bab5404609e8d49bc9bc242623c98f2d"
	I0929 13:10:42.235417 1430964 cri.go:89] found id: "fe92b189cf883cbe93d9474127d870f453d75c020b22114de99123f9f623f3a1"
	I0929 13:10:42.235421 1430964 cri.go:89] found id: "86d180d3fafecd80e755e727e2f50ad02bd1ea0707d33e41b1e2c298740f82b2"
	I0929 13:10:42.235426 1430964 cri.go:89] found id: "f157b54ee5632361a5614f30127b6f5dfc89ff0daa05de53a9f5257c9ebec23a"
	I0929 13:10:42.235429 1430964 cri.go:89] found id: "8c5c1254cf9381b1212b778b0bea8cccf2cd1cd3a2b9653e31070bc574cbe9d7"
	I0929 13:10:42.235431 1430964 cri.go:89] found id: "448fabba6fe89ac66791993182ef471d034e865da39b82ac763c5f6f70777c96"
	I0929 13:10:42.235434 1430964 cri.go:89] found id: "3e59ee92e127e9ebe23e71830eaec1c6942debeff812ea825dca6bd1ca6af1b8"
	I0929 13:10:42.235436 1430964 cri.go:89] found id: ""
	I0929 13:10:42.235495 1430964 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0929 13:10:42.250871 1430964 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-29T13:10:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0929 13:10:42.250953 1430964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:10:42.263482 1430964 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:10:42.263515 1430964 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:10:42.263568 1430964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:10:42.276428 1430964 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:10:42.277682 1430964 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-554589" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:42.278693 1430964 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1097891/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-554589" cluster setting kubeconfig missing "no-preload-554589" context setting]
	I0929 13:10:42.280300 1430964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.282772 1430964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:10:42.295661 1430964 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 13:10:42.295700 1430964 kubeadm.go:593] duration metric: took 32.178175ms to restartPrimaryControlPlane
	I0929 13:10:42.295712 1430964 kubeadm.go:394] duration metric: took 97.214108ms to StartCluster
	I0929 13:10:42.295732 1430964 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.295792 1430964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:42.298396 1430964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.298619 1430964 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:10:42.298702 1430964 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:10:42.298790 1430964 addons.go:69] Setting storage-provisioner=true in profile "no-preload-554589"
	I0929 13:10:42.298805 1430964 addons.go:238] Setting addon storage-provisioner=true in "no-preload-554589"
	W0929 13:10:42.298811 1430964 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:10:42.298837 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.298829 1430964 addons.go:69] Setting default-storageclass=true in profile "no-preload-554589"
	I0929 13:10:42.298848 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:42.298857 1430964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-554589"
	I0929 13:10:42.298870 1430964 addons.go:69] Setting dashboard=true in profile "no-preload-554589"
	I0929 13:10:42.298841 1430964 addons.go:69] Setting metrics-server=true in profile "no-preload-554589"
	I0929 13:10:42.298896 1430964 addons.go:238] Setting addon dashboard=true in "no-preload-554589"
	W0929 13:10:42.298906 1430964 addons.go:247] addon dashboard should already be in state true
	I0929 13:10:42.298908 1430964 addons.go:238] Setting addon metrics-server=true in "no-preload-554589"
	W0929 13:10:42.298917 1430964 addons.go:247] addon metrics-server should already be in state true
	I0929 13:10:42.298940 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.298942 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.299211 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299337 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299397 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299410 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.301050 1430964 out.go:179] * Verifying Kubernetes components...
	I0929 13:10:42.305464 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:42.327596 1430964 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:10:42.327632 1430964 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:10:42.329217 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:10:42.329249 1430964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:10:42.329324 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.329326 1430964 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:10:42.329804 1430964 addons.go:238] Setting addon default-storageclass=true in "no-preload-554589"
	W0929 13:10:42.329826 1430964 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:10:42.329858 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.330382 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.330802 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:10:42.330820 1430964 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:10:42.330878 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.332253 1430964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:10:42.333200 1430964 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.333216 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:10:42.333276 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.358580 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.361394 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.365289 1430964 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:42.366065 1430964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:10:42.366168 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.369057 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.398458 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.465331 1430964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:10:42.485873 1430964 node_ready.go:35] waiting up to 6m0s for node "no-preload-554589" to be "Ready" ...
	I0929 13:10:42.502155 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.502885 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:10:42.502905 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:10:42.517069 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:10:42.517097 1430964 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:10:42.522944 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:42.538079 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:10:42.538106 1430964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:10:42.545200 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:10:42.545228 1430964 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:10:42.570649 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:10:42.570677 1430964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:10:42.580495 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:10:42.580521 1430964 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:10:42.609253 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:10:42.609285 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:10:42.609512 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 13:10:42.611191 1430964 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:10:42.611286 1430964 retry.go:31] will retry after 216.136192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 13:10:42.634003 1430964 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:10:42.634107 1430964 retry.go:31] will retry after 293.519359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:10:42.643987 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:10:42.644016 1430964 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:10:42.674561 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:10:42.674595 1430964 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:10:42.702843 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:10:42.702873 1430964 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:10:42.728750 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:10:42.728781 1430964 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:10:42.753082 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:10:42.753106 1430964 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:10:42.772698 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:10:42.827939 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.928554 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:44.343592 1430964 node_ready.go:49] node "no-preload-554589" is "Ready"
	I0929 13:10:44.343632 1430964 node_ready.go:38] duration metric: took 1.857723898s for node "no-preload-554589" to be "Ready" ...
	I0929 13:10:44.343652 1430964 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:10:44.343710 1430964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:10:44.905717 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.296158013s)
	I0929 13:10:44.905761 1430964 addons.go:479] Verifying addon metrics-server=true in "no-preload-554589"
	I0929 13:10:44.905844 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.133099697s)
	I0929 13:10:44.907337 1430964 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-554589 addons enable metrics-server
	
	I0929 13:10:44.924947 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.096962675s)
	I0929 13:10:44.925023 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.996439442s)
	I0929 13:10:44.925049 1430964 api_server.go:72] duration metric: took 2.626402587s to wait for apiserver process to appear ...
	I0929 13:10:44.925058 1430964 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:10:44.925078 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:44.931266 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:44.931296 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:44.932611 1430964 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I0929 13:10:44.935452 1430964 addons.go:514] duration metric: took 2.636765019s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I0929 13:10:45.426011 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:45.431277 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:45.431304 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:45.925804 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:45.931188 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:45.931222 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:46.425589 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:46.429986 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:46.430025 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:46.925637 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:46.929914 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 13:10:46.931143 1430964 api_server.go:141] control plane version: v1.34.0
	I0929 13:10:46.931168 1430964 api_server.go:131] duration metric: took 2.006103154s to wait for apiserver health ...
	I0929 13:10:46.931177 1430964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:10:46.934948 1430964 system_pods.go:59] 9 kube-system pods found
	I0929 13:10:46.935007 1430964 system_pods.go:61] "coredns-66bc5c9577-6cxff" [0ec3329b-47fd-402f-b8ec-d482d1f9b3c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:46.935024 1430964 system_pods.go:61] "etcd-no-preload-554589" [6ae6f226-f3f5-4916-86ac-241f71542eec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:10:46.935032 1430964 system_pods.go:61] "kindnet-5z49c" [b688a8a1-9c75-42a1-be5a-48aff9897101] Running
	I0929 13:10:46.935040 1430964 system_pods.go:61] "kube-apiserver-no-preload-554589" [461eeb18-0997-4f04-b2f2-bd4f93ae16bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:10:46.935048 1430964 system_pods.go:61] "kube-controller-manager-no-preload-554589" [0095f296-2792-42a7-a015-f92d570fe2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:10:46.935052 1430964 system_pods.go:61] "kube-proxy-8kkxk" [0e984503-4cab-4fcf-a1cb-1684d2247f43] Running
	I0929 13:10:46.935064 1430964 system_pods.go:61] "kube-scheduler-no-preload-554589" [e47072a4-1f75-434b-aa66-477204025b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:10:46.935071 1430964 system_pods.go:61] "metrics-server-746fcd58dc-45phl" [638c53d3-4825-4387-bb3a-56dd0be70464] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:10:46.935075 1430964 system_pods.go:61] "storage-provisioner" [af1e37d9-c313-4db3-a626-81403cf9ad15] Running
	I0929 13:10:46.935084 1430964 system_pods.go:74] duration metric: took 3.897674ms to wait for pod list to return data ...
	I0929 13:10:46.935098 1430964 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:10:46.937529 1430964 default_sa.go:45] found service account: "default"
	I0929 13:10:46.937550 1430964 default_sa.go:55] duration metric: took 2.442128ms for default service account to be created ...
	I0929 13:10:46.937558 1430964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:10:46.940321 1430964 system_pods.go:86] 9 kube-system pods found
	I0929 13:10:46.940347 1430964 system_pods.go:89] "coredns-66bc5c9577-6cxff" [0ec3329b-47fd-402f-b8ec-d482d1f9b3c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:46.940355 1430964 system_pods.go:89] "etcd-no-preload-554589" [6ae6f226-f3f5-4916-86ac-241f71542eec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:10:46.940361 1430964 system_pods.go:89] "kindnet-5z49c" [b688a8a1-9c75-42a1-be5a-48aff9897101] Running
	I0929 13:10:46.940368 1430964 system_pods.go:89] "kube-apiserver-no-preload-554589" [461eeb18-0997-4f04-b2f2-bd4f93ae16bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:10:46.940375 1430964 system_pods.go:89] "kube-controller-manager-no-preload-554589" [0095f296-2792-42a7-a015-f92d570fe2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:10:46.940388 1430964 system_pods.go:89] "kube-proxy-8kkxk" [0e984503-4cab-4fcf-a1cb-1684d2247f43] Running
	I0929 13:10:46.940399 1430964 system_pods.go:89] "kube-scheduler-no-preload-554589" [e47072a4-1f75-434b-aa66-477204025b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:10:46.940412 1430964 system_pods.go:89] "metrics-server-746fcd58dc-45phl" [638c53d3-4825-4387-bb3a-56dd0be70464] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:10:46.940419 1430964 system_pods.go:89] "storage-provisioner" [af1e37d9-c313-4db3-a626-81403cf9ad15] Running
	I0929 13:10:46.940427 1430964 system_pods.go:126] duration metric: took 2.863046ms to wait for k8s-apps to be running ...
	I0929 13:10:46.940441 1430964 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:10:46.940488 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:10:46.954207 1430964 system_svc.go:56] duration metric: took 13.760371ms WaitForService to wait for kubelet
	I0929 13:10:46.954239 1430964 kubeadm.go:578] duration metric: took 4.655591833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:10:46.954275 1430964 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:10:46.957433 1430964 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:10:46.957457 1430964 node_conditions.go:123] node cpu capacity is 8
	I0929 13:10:46.957468 1430964 node_conditions.go:105] duration metric: took 3.188601ms to run NodePressure ...
	I0929 13:10:46.957482 1430964 start.go:241] waiting for startup goroutines ...
	I0929 13:10:46.957491 1430964 start.go:246] waiting for cluster config update ...
	I0929 13:10:46.957507 1430964 start.go:255] writing updated cluster config ...
	I0929 13:10:46.957779 1430964 ssh_runner.go:195] Run: rm -f paused
	I0929 13:10:46.961696 1430964 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:10:46.965332 1430964 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6cxff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:10:48.970466 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	I0929 13:10:50.103007 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:10:50.103057 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:10:50.103075 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:10:50.103091 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:50.103100 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:10:50.103107 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:10:50.103115 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:10:50.103122 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:10:50.103130 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:10:50.103135 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:10:50.103156 1359411 retry.go:31] will retry after 45.832047842s: missing components: kube-dns
	W0929 13:10:50.971068 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:52.971656 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:55.470753 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:57.970580 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:59.970813 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:01.970891 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:04.471085 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:06.971048 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:09.470881 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:11.471210 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:13.970282 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:15.971635 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:18.470862 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:20.471131 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	I0929 13:11:21.970955 1430964 pod_ready.go:94] pod "coredns-66bc5c9577-6cxff" is "Ready"
	I0929 13:11:21.971016 1430964 pod_ready.go:86] duration metric: took 35.005660476s for pod "coredns-66bc5c9577-6cxff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.973497 1430964 pod_ready.go:83] waiting for pod "etcd-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.976953 1430964 pod_ready.go:94] pod "etcd-no-preload-554589" is "Ready"
	I0929 13:11:21.977006 1430964 pod_ready.go:86] duration metric: took 3.479297ms for pod "etcd-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.978873 1430964 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.982431 1430964 pod_ready.go:94] pod "kube-apiserver-no-preload-554589" is "Ready"
	I0929 13:11:21.982453 1430964 pod_ready.go:86] duration metric: took 3.560274ms for pod "kube-apiserver-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.984284 1430964 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.168713 1430964 pod_ready.go:94] pod "kube-controller-manager-no-preload-554589" is "Ready"
	I0929 13:11:22.168740 1430964 pod_ready.go:86] duration metric: took 184.436823ms for pod "kube-controller-manager-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.369803 1430964 pod_ready.go:83] waiting for pod "kube-proxy-8kkxk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.769375 1430964 pod_ready.go:94] pod "kube-proxy-8kkxk" is "Ready"
	I0929 13:11:22.769412 1430964 pod_ready.go:86] duration metric: took 399.578121ms for pod "kube-proxy-8kkxk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.969424 1430964 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:23.369302 1430964 pod_ready.go:94] pod "kube-scheduler-no-preload-554589" is "Ready"
	I0929 13:11:23.369329 1430964 pod_ready.go:86] duration metric: took 399.880622ms for pod "kube-scheduler-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:23.369339 1430964 pod_ready.go:40] duration metric: took 36.407610233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:11:23.415562 1430964 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:11:23.417576 1430964 out.go:179] * Done! kubectl is now configured to use "no-preload-554589" cluster and "default" namespace by default
	I0929 13:11:35.940622 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:11:35.940666 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:11:35.940679 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:11:35.940691 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:11:35.940697 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:11:35.940703 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:11:35.940709 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:11:35.940717 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:11:35.940726 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:11:35.940732 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:11:35.940756 1359411 retry.go:31] will retry after 45.593833894s: missing components: kube-dns
	I0929 13:12:21.540022 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:12:21.540068 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:12:21.540078 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:12:21.540088 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:12:21.540096 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:12:21.540102 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:12:21.540108 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:12:21.540112 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:12:21.540117 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:12:21.540120 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:12:21.540139 1359411 retry.go:31] will retry after 1m5.22199495s: missing components: kube-dns
	I0929 13:13:26.769357 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:13:26.769402 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:13:26.769415 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:13:26.769424 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:13:26.769428 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:13:26.769432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:13:26.769438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:13:26.769442 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:13:26.769446 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:13:26.769449 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:13:26.769467 1359411 retry.go:31] will retry after 1m13.959390534s: missing components: kube-dns
	I0929 13:14:40.733869 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:40.733915 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:14:40.733926 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:14:40.733934 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:40.733937 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:14:40.733942 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:14:40.733946 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:14:40.733951 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:14:40.733954 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:14:40.733958 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:14:40.734002 1359411 retry.go:31] will retry after 1m13.688928173s: missing components: kube-dns
	I0929 13:15:54.426567 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:15:54.426609 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:15:54.426619 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:15:54.426627 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:15:54.426631 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:15:54.426635 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:15:54.426639 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:15:54.426644 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:15:54.426647 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:15:54.426650 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:15:54.426671 1359411 retry.go:31] will retry after 53.851303252s: missing components: kube-dns
	I0929 13:16:48.282415 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:16:48.282459 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:16:48.282471 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:16:48.282478 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:16:48.282481 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:16:48.282486 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:16:48.282489 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:16:48.282493 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:16:48.282496 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:16:48.282499 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:16:48.282516 1359411 retry.go:31] will retry after 1m1.774490631s: missing components: kube-dns
	I0929 13:17:50.062266 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:17:50.062389 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:17:50.062405 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:17:50.062415 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:17:50.062422 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:17:50.062432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:17:50.062438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:17:50.062447 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:17:50.062453 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:17:50.062460 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:17:50.062481 1359411 retry.go:31] will retry after 1m8.677032828s: missing components: kube-dns
	I0929 13:18:58.743760 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:18:58.743806 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:18:58.743817 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:18:58.743825 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:18:58.743832 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:18:58.743840 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:18:58.743846 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:18:58.743853 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:18:58.743858 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:18:58.743862 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:18:58.743885 1359411 retry.go:31] will retry after 1m8.264714311s: missing components: kube-dns
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e3b2666b123b9       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   ec6453a6e5b12       dashboard-metrics-scraper-5f989dc9cf-5f27f
	a038fdcf70d29       6e38f40d628db       8 minutes ago       Running             storage-provisioner         2                   619afdfc702d3       storage-provisioner
	07ebeb94d9201       409467f978b4a       9 minutes ago       Running             kindnet-cni                 1                   5862d4728adcf       kindnet-tjgv6
	fbd6fd44a800d       56cc512116c8f       9 minutes ago       Running             busybox                     1                   a2045c4c9d3f9       busybox
	751d011dd2b4f       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         1                   619afdfc702d3       storage-provisioner
	9cb18fdc41964       ead0a4a53df89       9 minutes ago       Running             coredns                     1                   387d4f18250f3       coredns-5dd5756b68-lrkcg
	94774ca53ecc3       ea1030da44aa1       9 minutes ago       Running             kube-proxy                  1                   06e6ebaf5e3c4       kube-proxy-9nsk9
	409cbf88accf8       bb5e0dde9054c       9 minutes ago       Running             kube-apiserver              1                   31dbc9330bff1       kube-apiserver-old-k8s-version-495121
	8d916df956e81       f6f496300a2ae       9 minutes ago       Running             kube-scheduler              1                   b5ea8e2ae6437       kube-scheduler-old-k8s-version-495121
	b75d2434561ff       4be79c38a4bab       9 minutes ago       Running             kube-controller-manager     1                   b57d4294bc4b4       kube-controller-manager-old-k8s-version-495121
	693060a993d1f       73deb9a3f7025       9 minutes ago       Running             etcd                        1                   ffbcadea4319c       etcd-old-k8s-version-495121
	b94dcb6279ab7       56cc512116c8f       10 minutes ago      Exited              busybox                     0                   471bb9570b0a6       busybox
	2ab90d73c849e       ead0a4a53df89       10 minutes ago      Exited              coredns                     0                   31ac04d1de38d       coredns-5dd5756b68-lrkcg
	713c1f40cb688       409467f978b4a       10 minutes ago      Exited              kindnet-cni                 0                   f1f2ce685864b       kindnet-tjgv6
	e08df30dda564       ea1030da44aa1       10 minutes ago      Exited              kube-proxy                  0                   0e6af22aff9b5       kube-proxy-9nsk9
	93fa1c32e856e       4be79c38a4bab       10 minutes ago      Exited              kube-controller-manager     0                   731628c0d3955       kube-controller-manager-old-k8s-version-495121
	17aad6d9a070d       f6f496300a2ae       10 minutes ago      Exited              kube-scheduler              0                   7c4c96e49175b       kube-scheduler-old-k8s-version-495121
	ce444d8ceca3b       bb5e0dde9054c       10 minutes ago      Exited              kube-apiserver              0                   a9884e42f1299       kube-apiserver-old-k8s-version-495121
	a6912c13f2e79       73deb9a3f7025       10 minutes ago      Exited              etcd                        0                   1a68764da09b3       etcd-old-k8s-version-495121
	
	
	==> containerd <==
	Sep 29 13:12:48 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:12:48.184515529Z" level=info msg="RemoveContainer for \"e7e96be13bd3edbf4224a3275ca41eb9110419757a96d982f3e7b88c8f7395a7\" returns successfully"
	Sep 29 13:13:00 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:13:00.639105277Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:13:00 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:13:00.640710109Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:13:01 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:13:01.304510630Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:13:03 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:13:03.373246463Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:13:03 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:13:03.373320366Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 29 13:14:59 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:14:59.639747946Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:14:59 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:14:59.700519618Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 29 13:14:59 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:14:59.701815281Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 13:14:59 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:14:59.701867793Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.641048985Z" level=info msg="CreateContainer within sandbox \"ec6453a6e5b12ffed1e8ad5f07111a62816a635458bbe12ce045e40f1b07e3d0\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.650658748Z" level=info msg="CreateContainer within sandbox \"ec6453a6e5b12ffed1e8ad5f07111a62816a635458bbe12ce045e40f1b07e3d0\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677\""
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.651334275Z" level=info msg="StartContainer for \"e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677\""
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.712535169Z" level=info msg="StartContainer for \"e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677\" returns successfully"
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.725443496Z" level=info msg="received exit event container_id:\"e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677\"  id:\"e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677\"  pid:2767  exit_status:1  exited_at:{seconds:1759151738  nanos:725196285}"
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.747223472Z" level=info msg="shim disconnected" id=e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677 namespace=k8s.io
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.747292721Z" level=warning msg="cleaning up after shim disconnected" id=e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677 namespace=k8s.io
	Sep 29 13:15:38 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:38.747308293Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:15:39 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:39.562061948Z" level=info msg="RemoveContainer for \"cb221fa35340407531b0b36fd422ea9b1ce86b88fe831c41c0a11d4d3c225be9\""
	Sep 29 13:15:39 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:39.565894621Z" level=info msg="RemoveContainer for \"cb221fa35340407531b0b36fd422ea9b1ce86b88fe831c41c0a11d4d3c225be9\" returns successfully"
	Sep 29 13:15:45 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:45.639932705Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:15:45 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:45.641756628Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:15:46 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:46.292020354Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:15:48 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:48.155877774Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:15:48 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:15:48.156003083Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	
	
	==> coredns [2ab90d73c849e3b421e70c032ef293fdaac96e068a9e25b6496ff8474b906234] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37065 - 11611 "HINFO IN 4379644100506618813.14631479693037293. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.033617954s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9cb18fdc419640ecd1f971b932e5008b1e25aba3a2c8082a6f7578eb631baae8] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59790 - 9195 "HINFO IN 5803060664264663451.3040212327738087743. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027895818s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-495121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-495121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=old-k8s-version-495121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_08_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:08:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-495121
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:19:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:15:10 +0000   Mon, 29 Sep 2025 13:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:15:10 +0000   Mon, 29 Sep 2025 13:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:15:10 +0000   Mon, 29 Sep 2025 13:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:15:10 +0000   Mon, 29 Sep 2025 13:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-495121
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 060a6b1b9edf42629635d8c7738efe8f
	  System UUID:                3ad91b0c-af3b-4a9d-8939-9ef7555c85d9
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-lrkcg                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-495121                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-tjgv6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-495121             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-495121    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9nsk9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-495121             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-t2mql                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         9m58s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-5f27f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-r4kbj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m34s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-495121 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-495121 event: Registered Node old-k8s-version-495121 in Controller
	  Normal  Starting                 9m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m39s (x8 over 9m39s)  kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x8 over 9m39s)  kubelet          Node old-k8s-version-495121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x7 over 9m39s)  kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m23s                  node-controller  Node old-k8s-version-495121 event: Registered Node old-k8s-version-495121 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [693060a993d1fef0eb00234c496e6e3658bb2fd678e27409bfda5ead0bcfce1e] <==
	{"level":"info","ts":"2025-09-29T13:09:30.570379Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-29T13:09:30.569826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-09-29T13:09:30.570818Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-09-29T13:09:30.571112Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:09:30.571295Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:09:30.572396Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T13:09:30.572566Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T13:09:30.572594Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T13:09:30.572823Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T13:09:30.57284Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T13:09:31.755203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T13:09:31.755243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T13:09:31.755273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T13:09:31.755293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.755301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.755317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.75533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.756209Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-495121 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T13:09:31.756277Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:09:31.756329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:09:31.756437Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T13:09:31.756504Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T13:09:31.757409Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T13:09:31.75758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T13:09:50.007936Z","caller":"traceutil/trace.go:171","msg":"trace[1118325892] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"158.422839ms","start":"2025-09-29T13:09:49.849495Z","end":"2025-09-29T13:09:50.007918Z","steps":["trace[1118325892] 'process raft request'  (duration: 85.206887ms)","trace[1118325892] 'compare'  (duration: 73.131049ms)"],"step_count":2}
	
	
	==> etcd [a6912c13f2e79c2ffb3a0b3f3bbb9acba46a19a6e9f51d9f98fcfda1050fa001] <==
	{"level":"info","ts":"2025-09-29T13:08:21.865712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T13:08:21.865724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-09-29T13:08:21.865734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T13:08:21.866654Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-495121 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T13:08:21.866881Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:21.86699Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:08:21.867007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:08:21.867113Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T13:08:21.867151Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T13:08:21.868469Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T13:08:21.868474Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T13:08:21.869471Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:21.872353Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:21.87239Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:37.323352Z","caller":"traceutil/trace.go:171","msg":"trace[959236800] linearizableReadLoop","detail":"{readStateIndex:293; appliedIndex:292; }","duration":"103.454638ms","start":"2025-09-29T13:08:37.219877Z","end":"2025-09-29T13:08:37.323332Z","steps":["trace[959236800] 'read index received'  (duration: 103.309711ms)","trace[959236800] 'applied index is now lower than readState.Index'  (duration: 144.255µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:08:37.323448Z","caller":"traceutil/trace.go:171","msg":"trace[620451919] transaction","detail":"{read_only:false; response_revision:281; number_of_response:1; }","duration":"121.986176ms","start":"2025-09-29T13:08:37.201438Z","end":"2025-09-29T13:08:37.323424Z","steps":["trace[620451919] 'process raft request'  (duration: 121.789257ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:08:37.323625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.704449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:08:37.323683Z","caller":"traceutil/trace.go:171","msg":"trace[85859342] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:281; }","duration":"103.823279ms","start":"2025-09-29T13:08:37.219839Z","end":"2025-09-29T13:08:37.323662Z","steps":["trace[85859342] 'agreement among raft nodes before linearized reading'  (duration: 103.60334ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:08:38.407635Z","caller":"traceutil/trace.go:171","msg":"trace[933570630] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"187.263359ms","start":"2025-09-29T13:08:38.220358Z","end":"2025-09-29T13:08:38.407621Z","steps":["trace[933570630] 'read index received'  (duration: 187.121481ms)","trace[933570630] 'applied index is now lower than readState.Index'  (duration: 141.579µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:08:38.407678Z","caller":"traceutil/trace.go:171","msg":"trace[1940461009] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"190.402332ms","start":"2025-09-29T13:08:38.217254Z","end":"2025-09-29T13:08:38.407656Z","steps":["trace[1940461009] 'process raft request'  (duration: 190.257703ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:08:38.407799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.443602ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:08:38.407838Z","caller":"traceutil/trace.go:171","msg":"trace[608446229] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:285; }","duration":"187.499358ms","start":"2025-09-29T13:08:38.220329Z","end":"2025-09-29T13:08:38.407829Z","steps":["trace[608446229] 'agreement among raft nodes before linearized reading'  (duration: 187.363643ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:08:38.812102Z","caller":"traceutil/trace.go:171","msg":"trace[1712206487] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"160.789593ms","start":"2025-09-29T13:08:38.651278Z","end":"2025-09-29T13:08:38.812067Z","steps":["trace[1712206487] 'process raft request'  (duration: 89.251367ms)","trace[1712206487] 'compare'  (duration: 71.239129ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T13:08:39.05647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.835854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-09-29T13:08:39.056542Z","caller":"traceutil/trace.go:171","msg":"trace[1242313453] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:289; }","duration":"105.932451ms","start":"2025-09-29T13:08:38.950595Z","end":"2025-09-29T13:08:39.056528Z","steps":["trace[1242313453] 'range keys from in-memory index tree'  (duration: 105.739359ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:19:08 up  6:01,  0 users,  load average: 0.26, 0.79, 1.52
	Linux old-k8s-version-495121 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [07ebeb94d9201d6034e8db286dd4633125bfa4b328ce93168b6d0df2a5dc09f2] <==
	I0929 13:17:05.117579       1 main.go:301] handling current node
	I0929 13:17:15.117332       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:17:15.117365       1 main.go:301] handling current node
	I0929 13:17:25.109085       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:17:25.109118       1 main.go:301] handling current node
	I0929 13:17:35.111856       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:17:35.111887       1 main.go:301] handling current node
	I0929 13:17:45.110227       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:17:45.110270       1 main.go:301] handling current node
	I0929 13:17:55.109030       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:17:55.109069       1 main.go:301] handling current node
	I0929 13:18:05.112602       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:18:05.112639       1 main.go:301] handling current node
	I0929 13:18:15.117735       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:18:15.117764       1 main.go:301] handling current node
	I0929 13:18:25.117828       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:18:25.117866       1 main.go:301] handling current node
	I0929 13:18:35.117129       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:18:35.117157       1 main.go:301] handling current node
	I0929 13:18:45.117381       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:18:45.117414       1 main.go:301] handling current node
	I0929 13:18:55.108461       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:18:55.108500       1 main.go:301] handling current node
	I0929 13:19:05.112557       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:19:05.112603       1 main.go:301] handling current node
	
	
	==> kindnet [713c1f40cb6884c94b00822e0e97263626264336dcaac97047f555996036cff6] <==
	I0929 13:08:44.762790       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:08:44.763123       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0929 13:08:44.763310       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:08:44.763327       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:08:44.763349       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:08:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:08:44.963750       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:08:44.963795       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:08:44.963809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:08:45.149140       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:08:45.349204       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:08:45.349238       1 metrics.go:72] Registering metrics
	I0929 13:08:45.349329       1 controller.go:711] "Syncing nftables rules"
	I0929 13:08:54.970059       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:08:54.970105       1 main.go:301] handling current node
	I0929 13:09:04.964036       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:09:04.964087       1 main.go:301] handling current node
	
	
	==> kube-apiserver [409cbf88accf80c713654c9bd05bebd979bd2fb817752533f6109a85c1272d91] <==
	E0929 13:14:33.961158       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:14:33.962266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:15:32.862012       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.57.16:443: connect: connection refused
	I0929 13:15:32.862041       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:15:33.961350       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:15:33.961381       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 13:15:33.961388       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:15:33.962506       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:15:33.962569       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:15:33.962580       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:16:32.860790       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.57.16:443: connect: connection refused
	I0929 13:16:32.860813       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 13:17:32.860882       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.57.16:443: connect: connection refused
	I0929 13:17:32.860907       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:17:33.962334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:17:33.962371       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 13:17:33.962378       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:17:33.963432       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:17:33.963498       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:17:33.963510       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:18:32.861314       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.57.16:443: connect: connection refused
	I0929 13:18:32.861344       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-apiserver [ce444d8ceca3be0406177949fc0a18555192824013e62d576c73df73c7e3426d] <==
	I0929 13:08:24.944113       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 13:08:25.490054       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0929 13:08:26.592748       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0929 13:08:26.605185       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:08:26.615055       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0929 13:08:40.007747       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0929 13:08:40.408561       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	W0929 13:09:10.099906       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.100007       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:09:10.100320       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0929 13:09:10.100347       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:09:10.106812       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.106920       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0929 13:09:10.107001       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:09:10.107035       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0929 13:09:10.107049       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 13:09:10.189775       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.108.57.16"}
	W0929 13:09:10.195135       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.195309       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0929 13:09:10.206313       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.206388       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	
	
	==> kube-controller-manager [93fa1c32e856e877fc3536e7e9322026ec081194ee396a0ca36f2bb324e84e70] <==
	I0929 13:08:40.432735       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9nsk9"
	I0929 13:08:40.435597       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tjgv6"
	I0929 13:08:40.569008       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lrkcg"
	I0929 13:08:40.582074       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-r6nct"
	I0929 13:08:40.594310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="582.031636ms"
	I0929 13:08:40.608794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.413785ms"
	I0929 13:08:40.609163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="206.997µs"
	I0929 13:08:40.617249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.581µs"
	I0929 13:08:40.716594       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0929 13:08:40.724664       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-r6nct"
	I0929 13:08:40.730531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.155928ms"
	I0929 13:08:40.736467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.858703ms"
	I0929 13:08:40.736578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.062µs"
	I0929 13:08:42.752078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.170743ms"
	I0929 13:08:42.759806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.309µs"
	I0929 13:08:42.761460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.349µs"
	I0929 13:08:56.764571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="166.733µs"
	I0929 13:08:56.788431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.057469ms"
	I0929 13:08:56.788570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.87µs"
	I0929 13:09:10.124733       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0929 13:09:10.132802       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-t2mql"
	I0929 13:09:10.140882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="17.197814ms"
	I0929 13:09:10.149887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="8.712376ms"
	I0929 13:09:10.167645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="17.584587ms"
	I0929 13:09:10.167743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="58.543µs"
	
	
	==> kube-controller-manager [b75d2434561ffecb03a963adac8a285eab3878a2ad25c7a9c81b1fbf6cdef6e7] <==
	I0929 13:14:15.819841       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:14:45.459088       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:14:45.827076       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:15:11.649755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="154.53µs"
	E0929 13:15:15.463209       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:15:15.834053       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:15:24.650704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="140.502µs"
	I0929 13:15:39.570865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="111.934µs"
	E0929 13:15:45.468273       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:15:45.693003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="124.931µs"
	I0929 13:15:45.840791       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:16:02.649133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="127.372µs"
	E0929 13:16:15.473134       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:16:15.649094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="142.902µs"
	I0929 13:16:15.847861       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:16:45.477893       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:16:45.854712       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:17:15.482070       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:17:15.861450       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:17:45.486739       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:17:45.868431       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:18:15.491226       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:18:15.878899       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:18:45.495507       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:18:45.886108       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [94774ca53ecc33a65be8ebcbbf981446c867be0a265707362ea3cac13b0a4e18] <==
	I0929 13:09:34.428841       1 server_others.go:69] "Using iptables proxy"
	I0929 13:09:34.443187       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 13:09:34.479764       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:09:34.482643       1 server_others.go:152] "Using iptables Proxier"
	I0929 13:09:34.482725       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 13:09:34.482746       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 13:09:34.482783       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 13:09:34.483238       1 server.go:846] "Version info" version="v1.28.0"
	I0929 13:09:34.483321       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:34.483935       1 config.go:97] "Starting endpoint slice config controller"
	I0929 13:09:34.484005       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 13:09:34.484024       1 config.go:315] "Starting node config controller"
	I0929 13:09:34.484086       1 config.go:188] "Starting service config controller"
	I0929 13:09:34.484129       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 13:09:34.484044       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 13:09:34.584441       1 shared_informer.go:318] Caches are synced for service config
	I0929 13:09:34.584546       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 13:09:34.584980       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [e08df30dda564a006470ec044cfc6afc64a3beea8f1fa3c3115860dc7abcc524] <==
	I0929 13:08:41.028011       1 server_others.go:69] "Using iptables proxy"
	I0929 13:08:41.037778       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 13:08:41.060348       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:08:41.063141       1 server_others.go:152] "Using iptables Proxier"
	I0929 13:08:41.063187       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 13:08:41.063196       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 13:08:41.063241       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 13:08:41.063476       1 server.go:846] "Version info" version="v1.28.0"
	I0929 13:08:41.063491       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:08:41.064165       1 config.go:97] "Starting endpoint slice config controller"
	I0929 13:08:41.064255       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 13:08:41.064295       1 config.go:188] "Starting service config controller"
	I0929 13:08:41.064326       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 13:08:41.065673       1 config.go:315] "Starting node config controller"
	I0929 13:08:41.065698       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 13:08:41.164574       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 13:08:41.165794       1 shared_informer.go:318] Caches are synced for node config
	I0929 13:08:41.165821       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [17aad6d9a070d7152eb3c4cde533105a2245fe898116322557bcf9e72f1c9c09] <==
	E0929 13:08:23.485452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0929 13:08:23.485475       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 13:08:23.485491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 13:08:23.485241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0929 13:08:23.485558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:23.485175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 13:08:23.485587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:23.485523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0929 13:08:23.485611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 13:08:23.485627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:23.485533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0929 13:08:23.485653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0929 13:08:24.306272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 13:08:24.306307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0929 13:08:24.415289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0929 13:08:24.415334       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0929 13:08:24.457854       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0929 13:08:24.457899       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:08:24.526173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 13:08:24.526213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0929 13:08:24.606272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0929 13:08:24.606310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:24.635937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0929 13:08:24.636013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0929 13:08:26.482542       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8d916df956e81739bced98cd9c6200c78b40ecdf9581e3dd987ac8c0f0d40caa] <==
	I0929 13:09:30.863173       1 serving.go:348] Generated self-signed cert in-memory
	W0929 13:09:32.962668       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:09:32.962804       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0929 13:09:32.962864       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:09:32.962900       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:09:32.990773       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 13:09:32.990865       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:32.992728       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:09:32.992784       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 13:09:32.993803       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 13:09:32.993899       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 13:09:33.093190       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 13:17:55 old-k8s-version-495121 kubelet[611]: I0929 13:17:55.638947     611 scope.go:117] "RemoveContainer" containerID="e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677"
	Sep 29 13:17:55 old-k8s-version-495121 kubelet[611]: E0929 13:17:55.639232     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:18:01 old-k8s-version-495121 kubelet[611]: E0929 13:18:01.639411     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:18:08 old-k8s-version-495121 kubelet[611]: I0929 13:18:08.638669     611 scope.go:117] "RemoveContainer" containerID="e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677"
	Sep 29 13:18:08 old-k8s-version-495121 kubelet[611]: E0929 13:18:08.639046     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:18:08 old-k8s-version-495121 kubelet[611]: E0929 13:18:08.639328     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:18:15 old-k8s-version-495121 kubelet[611]: E0929 13:18:15.638954     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:18:19 old-k8s-version-495121 kubelet[611]: I0929 13:18:19.639279     611 scope.go:117] "RemoveContainer" containerID="e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677"
	Sep 29 13:18:19 old-k8s-version-495121 kubelet[611]: E0929 13:18:19.639672     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:18:23 old-k8s-version-495121 kubelet[611]: E0929 13:18:23.640032     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:18:28 old-k8s-version-495121 kubelet[611]: E0929 13:18:28.639794     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:18:30 old-k8s-version-495121 kubelet[611]: I0929 13:18:30.638506     611 scope.go:117] "RemoveContainer" containerID="e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677"
	Sep 29 13:18:30 old-k8s-version-495121 kubelet[611]: E0929 13:18:30.638785     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:18:38 old-k8s-version-495121 kubelet[611]: E0929 13:18:38.639031     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:18:40 old-k8s-version-495121 kubelet[611]: E0929 13:18:40.639391     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:18:42 old-k8s-version-495121 kubelet[611]: I0929 13:18:42.639131     611 scope.go:117] "RemoveContainer" containerID="e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677"
	Sep 29 13:18:42 old-k8s-version-495121 kubelet[611]: E0929 13:18:42.639400     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:18:50 old-k8s-version-495121 kubelet[611]: E0929 13:18:50.639792     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:18:51 old-k8s-version-495121 kubelet[611]: E0929 13:18:51.639666     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:18:54 old-k8s-version-495121 kubelet[611]: I0929 13:18:54.638643     611 scope.go:117] "RemoveContainer" containerID="e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677"
	Sep 29 13:18:54 old-k8s-version-495121 kubelet[611]: E0929 13:18:54.638996     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:19:03 old-k8s-version-495121 kubelet[611]: E0929 13:19:03.639414     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:19:06 old-k8s-version-495121 kubelet[611]: E0929 13:19:06.639386     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:19:08 old-k8s-version-495121 kubelet[611]: I0929 13:19:08.638892     611 scope.go:117] "RemoveContainer" containerID="e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677"
	Sep 29 13:19:08 old-k8s-version-495121 kubelet[611]: E0929 13:19:08.639252     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	
	
	==> storage-provisioner [751d011dd2b4f7bfde9c11190bfefc1f4af16becd517aa7aef474ca37e4713a9] <==
	I0929 13:09:34.464775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:10:04.468263       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a038fdcf70d29567f594b699e32c1ff935cbb3227c70709b73bc8579cb052b3b] <==
	I0929 13:10:18.722633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 13:10:18.731313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 13:10:18.731354       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 13:10:36.129377       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 13:10:36.129568       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-495121_0e869ed1-b387-4391-9538-34bd9f4b72bb!
	I0929 13:10:36.129532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99a8355f-5957-46bd-a556-a580d532ae77", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-495121_0e869ed1-b387-4391-9538-34bd9f4b72bb became leader
	I0929 13:10:36.229884       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-495121_0e869ed1-b387-4391-9538-34bd9f4b72bb!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-495121 -n old-k8s-version-495121
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-495121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-495121 describe pod metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-495121 describe pod metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj: exit status 1 (63.536366ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-t2mql" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-r4kbj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-495121 describe pod metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rfr6g" [93751269-985a-4d2f-9768-407c72ae300b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-644246 -n embed-certs-644246
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:19:24.003854256 +0000 UTC m=+3728.587877447
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-644246 describe po kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-644246 describe po kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-rfr6g
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-644246/192.168.103.2
Start Time:       Mon, 29 Sep 2025 13:09:52 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9znr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-r9znr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m32s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g to embed-certs-644246
  Warning  Failed     6m35s (x5 over 9m29s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m35s (x5 over 9m29s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m20s (x19 over 9m29s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m9s (x20 over 9m29s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Normal   Pulling    3m54s (x6 over 9m32s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-644246 logs kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-644246 logs kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard: exit status 1 (69.14877ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-rfr6g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-644246 logs kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-644246
helpers_test.go:243: (dbg) docker inspect embed-certs-644246:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45",
	        "Created": "2025-09-29T13:08:39.203437343Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1425320,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:09:39.085003932Z",
	            "FinishedAt": "2025-09-29T13:09:38.261763457Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/hosts",
	        "LogPath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45-json.log",
	        "Name": "/embed-certs-644246",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-644246:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-644246",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45",
	                "LowerDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-644246",
	                "Source": "/var/lib/docker/volumes/embed-certs-644246/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-644246",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-644246",
	                "name.minikube.sigs.k8s.io": "embed-certs-644246",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed260214a0ab2fdd1689f64c200597a193992633e9689ea01a8bcc875fa1f3e9",
	            "SandboxKey": "/var/run/docker/netns/ed260214a0ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33606"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33607"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33610"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33608"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33609"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-644246": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:dc:72:bf:a7:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4aa0b62087589daf1b8a6964a178a4f629a9ee977cbac243d7975189f723e4fb",
	                    "EndpointID": "1325dd8031237a00772a0b4cf5a9a39052e4518aca1060d9c1e37c2d9df360d8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-644246",
	                        "8a578dd7f9fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
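(For reference, not part of the captured output: individual fields from the inspect document above can be extracted with Go-template formatting, which is what the harness itself does further down for the SSH port; a minimal sketch against the same container.)
	docker inspect -f '{{.State.Status}}' embed-certs-644246                                               # container state only
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-644246  # host port mapped to 22/tcp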
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-644246 -n embed-certs-644246
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-644246 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-644246 logs -n 25: (1.541168859s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p bridge-321209 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo containerd config dump                                                                                                                                                                                                        │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │                     │
	│ ssh     │ -p bridge-321209 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo crio config                                                                                                                                                                                                                   │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ delete  │ -p bridge-321209                                                                                                                                                                                                                                    │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ delete  │ -p disable-driver-mounts-849793                                                                                                                                                                                                                     │ disable-driver-mounts-849793 │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-495121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-495121 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-495121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p old-k8s-version-495121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p embed-certs-644246 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ stop    │ -p embed-certs-644246 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable dashboard -p embed-certs-644246 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p embed-certs-644246 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-554589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-554589 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-554589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
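	(For reference, not part of the captured output: the profiles named in the audit table can be listed with the same binary; illustrative only.)
	out/minikube-linux-amd64 profile list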
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:10:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:10:35.887390 1430964 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:10:35.887528 1430964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:10:35.887538 1430964 out.go:374] Setting ErrFile to fd 2...
	I0929 13:10:35.887543 1430964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:10:35.887766 1430964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:10:35.888286 1430964 out.go:368] Setting JSON to false
	I0929 13:10:35.889692 1430964 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21173,"bootTime":1759130263,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:10:35.889804 1430964 start.go:140] virtualization: kvm guest
	I0929 13:10:35.892010 1430964 out.go:179] * [no-preload-554589] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:10:35.893293 1430964 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:10:35.893300 1430964 notify.go:220] Checking for updates...
	I0929 13:10:35.895737 1430964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:10:35.896838 1430964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:35.897902 1430964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:10:35.898915 1430964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:10:35.899947 1430964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:10:35.901594 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:35.902157 1430964 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:10:35.926890 1430964 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:10:35.926997 1430964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:10:35.983850 1430964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:10:35.973663238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:10:35.983999 1430964 docker.go:318] overlay module found
	I0929 13:10:35.986231 1430964 out.go:179] * Using the docker driver based on existing profile
	I0929 13:10:35.987170 1430964 start.go:304] selected driver: docker
	I0929 13:10:35.987184 1430964 start.go:924] validating driver "docker" against &{Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:35.987271 1430964 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:10:35.987858 1430964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:10:36.048316 1430964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:10:36.037327075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:10:36.048601 1430964 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:10:36.048632 1430964 cni.go:84] Creating CNI manager for ""
	I0929 13:10:36.048678 1430964 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:10:36.048716 1430964 start.go:348] cluster config:
	{Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:36.050290 1430964 out.go:179] * Starting "no-preload-554589" primary control-plane node in "no-preload-554589" cluster
	I0929 13:10:36.051338 1430964 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:10:36.052310 1430964 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:10:36.053168 1430964 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:10:36.053271 1430964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:10:36.053310 1430964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/config.json ...
	I0929 13:10:36.053485 1430964 cache.go:107] acquiring lock: {Name:mk0a24f1bf5eff836d398ee592530f35f71c0ee4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053482 1430964 cache.go:107] acquiring lock: {Name:mk71aec952ee722ffcd940a39d5e958f64a61352 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053585 1430964 cache.go:107] acquiring lock: {Name:mk34c1dbc7ce4b55aef58920d74b57fccb4f6138 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053579 1430964 cache.go:107] acquiring lock: {Name:mke82396d3d70feba1e14470b5460d60995ab461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053623 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 13:10:36.053595 1430964 cache.go:107] acquiring lock: {Name:mkbf689face8cd4cbe1088f8d16d264b311f5a05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053636 1430964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 175.418µs
	I0929 13:10:36.053653 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 13:10:36.053655 1430964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 13:10:36.053662 1430964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 80.179µs
	I0929 13:10:36.053671 1430964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 13:10:36.053681 1430964 cache.go:107] acquiring lock: {Name:mk3476c105048b10b0947812a968956108eab0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053739 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 13:10:36.053734 1430964 cache.go:107] acquiring lock: {Name:mka7f06997e7f1d40489000070294d8bfac768af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053755 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 13:10:36.053752 1430964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 234.346µs
	I0929 13:10:36.053771 1430964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 13:10:36.053770 1430964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 233.678µs
	I0929 13:10:36.053804 1430964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 13:10:36.053720 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 13:10:36.053827 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 13:10:36.053833 1430964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 365.093µs
	I0929 13:10:36.053851 1430964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 13:10:36.053850 1430964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 238.143µs
	I0929 13:10:36.053859 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 13:10:36.053879 1430964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 191.128µs
	I0929 13:10:36.053891 1430964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 13:10:36.053864 1430964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 13:10:36.053591 1430964 cache.go:107] acquiring lock: {Name:mk385a135f933810a76b1272dffaf4891eef10f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.054019 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 13:10:36.054027 1430964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 443.615µs
	I0929 13:10:36.054035 1430964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 13:10:36.054043 1430964 cache.go:87] Successfully saved all images to host disk.
	I0929 13:10:36.075042 1430964 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:10:36.075061 1430964 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:10:36.075077 1430964 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:10:36.075108 1430964 start.go:360] acquireMachinesLock for no-preload-554589: {Name:mk5ff8f08413e283845bfb46ae253fb42cbb2a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.075172 1430964 start.go:364] duration metric: took 44.583µs to acquireMachinesLock for "no-preload-554589"
	I0929 13:10:36.075206 1430964 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:10:36.075218 1430964 fix.go:54] fixHost starting: 
	I0929 13:10:36.075468 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:36.094782 1430964 fix.go:112] recreateIfNeeded on no-preload-554589: state=Stopped err=<nil>
	W0929 13:10:36.094818 1430964 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:10:36.096594 1430964 out.go:252] * Restarting existing docker container for "no-preload-554589" ...
	I0929 13:10:36.096656 1430964 cli_runner.go:164] Run: docker start no-preload-554589
	I0929 13:10:36.348329 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:36.367780 1430964 kic.go:430] container "no-preload-554589" state is running.
	I0929 13:10:36.368218 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:36.387825 1430964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/config.json ...
	I0929 13:10:36.388091 1430964 machine.go:93] provisionDockerMachine start ...
	I0929 13:10:36.388191 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:36.407360 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:36.407692 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:36.407711 1430964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:10:36.408408 1430964 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40280->127.0.0.1:33611: read: connection reset by peer
	I0929 13:10:39.547089 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-554589
	
	I0929 13:10:39.547121 1430964 ubuntu.go:182] provisioning hostname "no-preload-554589"
	I0929 13:10:39.547190 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.564551 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:39.564843 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:39.564862 1430964 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-554589 && echo "no-preload-554589" | sudo tee /etc/hostname
	I0929 13:10:39.715451 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-554589
	
	I0929 13:10:39.715532 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.733400 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:39.733671 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:39.733690 1430964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-554589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-554589/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-554589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:10:39.872701 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:10:39.872728 1430964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:10:39.872749 1430964 ubuntu.go:190] setting up certificates
	I0929 13:10:39.872759 1430964 provision.go:84] configureAuth start
	I0929 13:10:39.872813 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:39.891390 1430964 provision.go:143] copyHostCerts
	I0929 13:10:39.891464 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:10:39.891484 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:10:39.891561 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:10:39.891693 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:10:39.891709 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:10:39.891752 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:10:39.891910 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:10:39.891923 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:10:39.891972 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:10:39.892068 1430964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.no-preload-554589 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-554589]
	I0929 13:10:39.939438 1430964 provision.go:177] copyRemoteCerts
	I0929 13:10:39.939504 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:10:39.939548 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.956799 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.055067 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:10:40.080134 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 13:10:40.104611 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:10:40.129350 1430964 provision.go:87] duration metric: took 256.573931ms to configureAuth
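	(For reference, not part of the captured output: a minimal sketch of verifying the SANs on the server certificate generated above, using the path reported by the provisioner.)
	openssl x509 -in /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem -noout -text | grep -A1 "Subject Alternative Name"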
	I0929 13:10:40.129378 1430964 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:10:40.129599 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:40.129612 1430964 machine.go:96] duration metric: took 3.741506785s to provisionDockerMachine
	I0929 13:10:40.129622 1430964 start.go:293] postStartSetup for "no-preload-554589" (driver="docker")
	I0929 13:10:40.129637 1430964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:10:40.129690 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:10:40.129756 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.147536 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.246335 1430964 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:10:40.249785 1430964 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:10:40.249812 1430964 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:10:40.249819 1430964 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:10:40.249826 1430964 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:10:40.249835 1430964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:10:40.249880 1430964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:10:40.249948 1430964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:10:40.250070 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:10:40.259126 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:10:40.284860 1430964 start.go:296] duration metric: took 155.217314ms for postStartSetup
	I0929 13:10:40.284948 1430964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:10:40.285044 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.302550 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.396065 1430964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:10:40.400658 1430964 fix.go:56] duration metric: took 4.325395629s for fixHost
	I0929 13:10:40.400685 1430964 start.go:83] releasing machines lock for "no-preload-554589", held for 4.325500319s
	I0929 13:10:40.400745 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:40.419253 1430964 ssh_runner.go:195] Run: cat /version.json
	I0929 13:10:40.419302 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.419316 1430964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:10:40.419372 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.437334 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.437565 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.530040 1430964 ssh_runner.go:195] Run: systemctl --version
	I0929 13:10:40.618702 1430964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:10:40.623606 1430964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:10:40.643627 1430964 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:10:40.643704 1430964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:10:40.655028 1430964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:10:40.655056 1430964 start.go:495] detecting cgroup driver to use...
	I0929 13:10:40.655090 1430964 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:10:40.655143 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:10:40.669887 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:10:40.682685 1430964 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:10:40.682743 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:10:40.697781 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:10:40.710870 1430964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:10:40.781641 1430964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:10:40.850419 1430964 docker.go:234] disabling docker service ...
	I0929 13:10:40.850476 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:10:40.864573 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:10:40.877583 1430964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:10:40.947404 1430964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:10:41.013464 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:10:41.025589 1430964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:10:41.043594 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:10:41.054426 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:10:41.064879 1430964 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:10:41.064945 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:10:41.075614 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:10:41.085902 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:10:41.096231 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:10:41.106375 1430964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:10:41.116101 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:10:41.126585 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:10:41.136683 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
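	The sed edits above rewrite /etc/containerd/config.toml in place (sandbox image, cgroup driver, runc v2 runtime, CNI conf_dir, unprivileged ports). A hedged spot-check of the result, reusing the profile name from this run:
	    # Verify the containerd settings that were just patched in.
	    minikube ssh -p no-preload-554589 "sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml"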
	I0929 13:10:41.147471 1430964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:10:41.156376 1430964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:10:41.164882 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:41.232125 1430964 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:10:41.336741 1430964 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:10:41.336815 1430964 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:10:41.341097 1430964 start.go:563] Will wait 60s for crictl version
	I0929 13:10:41.341150 1430964 ssh_runner.go:195] Run: which crictl
	I0929 13:10:41.344984 1430964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:10:41.381858 1430964 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
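	Because /etc/crictl.yaml was written above to point at the containerd socket, the same runtime probe can be reproduced by hand; a sketch assuming the profile from this run:
	    # Manual equivalent of the CRI version check reported above.
	    minikube ssh -p no-preload-554589 "sudo crictl version"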
	I0929 13:10:41.381934 1430964 ssh_runner.go:195] Run: containerd --version
	I0929 13:10:41.407752 1430964 ssh_runner.go:195] Run: containerd --version
	I0929 13:10:41.435044 1430964 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:10:41.436030 1430964 cli_runner.go:164] Run: docker network inspect no-preload-554589 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:10:41.453074 1430964 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 13:10:41.457289 1430964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:10:41.469647 1430964 kubeadm.go:875] updating cluster {Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:10:41.469759 1430964 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:10:41.469801 1430964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:10:41.505893 1430964 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:10:41.505917 1430964 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:10:41.505925 1430964 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0929 13:10:41.506080 1430964 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-554589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:10:41.506140 1430964 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:10:41.542471 1430964 cni.go:84] Creating CNI manager for ""
	I0929 13:10:41.542493 1430964 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:10:41.542504 1430964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:10:41.542530 1430964 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-554589 NodeName:no-preload-554589 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:10:41.542668 1430964 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-554589"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:10:41.542745 1430964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:10:41.552925 1430964 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:10:41.553026 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:10:41.562817 1430964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0929 13:10:41.581742 1430964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:10:41.600851 1430964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
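	Once kubeadm.yaml.new has been staged as above, the rendered configuration can be inspected on the node, and newer kubeadm releases can also lint it; a sketch using the binary path from this log:
	    # View the generated kubeadm config staged by minikube.
	    minikube ssh -p no-preload-554589 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	    # Optional lint; 'kubeadm config validate' exists only in recent kubeadm versions.
	    minikube ssh -p no-preload-554589 "sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"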
	I0929 13:10:41.620107 1430964 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:10:41.623949 1430964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:10:41.636268 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:41.709798 1430964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:10:41.732612 1430964 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589 for IP: 192.168.94.2
	I0929 13:10:41.732634 1430964 certs.go:194] generating shared ca certs ...
	I0929 13:10:41.732655 1430964 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:41.732829 1430964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:10:41.732882 1430964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:10:41.732897 1430964 certs.go:256] generating profile certs ...
	I0929 13:10:41.733042 1430964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/client.key
	I0929 13:10:41.733119 1430964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.key.98402d2c
	I0929 13:10:41.733170 1430964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.key
	I0929 13:10:41.733316 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:10:41.733355 1430964 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:10:41.733367 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:10:41.733400 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:10:41.733427 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:10:41.733467 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:10:41.733519 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:10:41.734337 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:10:41.765009 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:10:41.793504 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:10:41.827789 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:10:41.857035 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:10:41.884766 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:10:41.911756 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:10:41.941605 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:10:41.967516 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:10:41.992710 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:10:42.018319 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:10:42.042856 1430964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:10:42.060923 1430964 ssh_runner.go:195] Run: openssl version
	I0929 13:10:42.066444 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:10:42.076065 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.079599 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.079650 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.086452 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
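	The b5213941.0 symlink name above is the OpenSSL subject hash of minikubeCA.pem, which is how the /etc/ssl/certs lookup directory is populated; an illustrative check:
	    # Prints the subject hash used for the /etc/ssl/certs/<hash>.0 symlink (b5213941 in this run).
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem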
	I0929 13:10:42.095408 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:10:42.105262 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.108926 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.108999 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.115656 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 13:10:42.124799 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:10:42.134401 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.137842 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.137890 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.145059 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:10:42.154717 1430964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:10:42.158651 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:10:42.165748 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:10:42.172341 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:10:42.178784 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:10:42.185439 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:10:42.192086 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
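	Each -checkend 86400 run above succeeds only if the certificate will still be valid 24 hours from now; a standalone sketch with a placeholder path:
	    # Illustrative only: exit status 0 means the cert is valid for at least another day.
	    if openssl x509 -noout -in /path/to/cert.crt -checkend 86400; then
	      echo "certificate valid for at least 24h"
	    else
	      echo "certificate expires within 24h (or is already expired)"
	    fi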
	I0929 13:10:42.198506 1430964 kubeadm.go:392] StartCluster: {Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:42.198617 1430964 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:10:42.198653 1430964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:10:42.235388 1430964 cri.go:89] found id: "3aa4e89ae916232c207fa3b1b9f357dad149bbb0d0a5d1cd2b42c27cad6374b2"
	I0929 13:10:42.235408 1430964 cri.go:89] found id: "21b59ec52c2f189cce4c1c71122fb539bab5404609e8d49bc9bc242623c98f2d"
	I0929 13:10:42.235417 1430964 cri.go:89] found id: "fe92b189cf883cbe93d9474127d870f453d75c020b22114de99123f9f623f3a1"
	I0929 13:10:42.235421 1430964 cri.go:89] found id: "86d180d3fafecd80e755e727e2f50ad02bd1ea0707d33e41b1e2c298740f82b2"
	I0929 13:10:42.235426 1430964 cri.go:89] found id: "f157b54ee5632361a5614f30127b6f5dfc89ff0daa05de53a9f5257c9ebec23a"
	I0929 13:10:42.235429 1430964 cri.go:89] found id: "8c5c1254cf9381b1212b778b0bea8cccf2cd1cd3a2b9653e31070bc574cbe9d7"
	I0929 13:10:42.235431 1430964 cri.go:89] found id: "448fabba6fe89ac66791993182ef471d034e865da39b82ac763c5f6f70777c96"
	I0929 13:10:42.235434 1430964 cri.go:89] found id: "3e59ee92e127e9ebe23e71830eaec1c6942debeff812ea825dca6bd1ca6af1b8"
	I0929 13:10:42.235436 1430964 cri.go:89] found id: ""
	I0929 13:10:42.235495 1430964 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0929 13:10:42.250871 1430964 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-29T13:10:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0929 13:10:42.250953 1430964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:10:42.263482 1430964 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:10:42.263515 1430964 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:10:42.263568 1430964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:10:42.276428 1430964 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:10:42.277682 1430964 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-554589" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:42.278693 1430964 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1097891/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-554589" cluster setting kubeconfig missing "no-preload-554589" context setting]
	I0929 13:10:42.280300 1430964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.282772 1430964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:10:42.295661 1430964 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 13:10:42.295700 1430964 kubeadm.go:593] duration metric: took 32.178175ms to restartPrimaryControlPlane
	I0929 13:10:42.295712 1430964 kubeadm.go:394] duration metric: took 97.214108ms to StartCluster
	I0929 13:10:42.295732 1430964 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.295792 1430964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:42.298396 1430964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.298619 1430964 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:10:42.298702 1430964 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:10:42.298790 1430964 addons.go:69] Setting storage-provisioner=true in profile "no-preload-554589"
	I0929 13:10:42.298805 1430964 addons.go:238] Setting addon storage-provisioner=true in "no-preload-554589"
	W0929 13:10:42.298811 1430964 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:10:42.298837 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.298829 1430964 addons.go:69] Setting default-storageclass=true in profile "no-preload-554589"
	I0929 13:10:42.298848 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:42.298857 1430964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-554589"
	I0929 13:10:42.298870 1430964 addons.go:69] Setting dashboard=true in profile "no-preload-554589"
	I0929 13:10:42.298841 1430964 addons.go:69] Setting metrics-server=true in profile "no-preload-554589"
	I0929 13:10:42.298896 1430964 addons.go:238] Setting addon dashboard=true in "no-preload-554589"
	W0929 13:10:42.298906 1430964 addons.go:247] addon dashboard should already be in state true
	I0929 13:10:42.298908 1430964 addons.go:238] Setting addon metrics-server=true in "no-preload-554589"
	W0929 13:10:42.298917 1430964 addons.go:247] addon metrics-server should already be in state true
	I0929 13:10:42.298940 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.298942 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.299211 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299337 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299397 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299410 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.301050 1430964 out.go:179] * Verifying Kubernetes components...
	I0929 13:10:42.305464 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:42.327596 1430964 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:10:42.327632 1430964 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:10:42.329217 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:10:42.329249 1430964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:10:42.329324 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.329326 1430964 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:10:42.329804 1430964 addons.go:238] Setting addon default-storageclass=true in "no-preload-554589"
	W0929 13:10:42.329826 1430964 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:10:42.329858 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.330382 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.330802 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:10:42.330820 1430964 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:10:42.330878 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.332253 1430964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:10:42.333200 1430964 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.333216 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:10:42.333276 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.358580 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.361394 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.365289 1430964 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:42.366065 1430964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:10:42.366168 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.369057 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.398458 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
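	The ssh clients above all reuse host port 33611, which is resolved from the container's published 22/tcp port; the same lookup can be run by hand (command taken from the log, minus the extra quoting):
	    # Resolve the host port mapped to the node's SSH port.
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-554589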
	I0929 13:10:42.465331 1430964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:10:42.485873 1430964 node_ready.go:35] waiting up to 6m0s for node "no-preload-554589" to be "Ready" ...
	I0929 13:10:42.502155 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.502885 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:10:42.502905 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:10:42.517069 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:10:42.517097 1430964 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:10:42.522944 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:42.538079 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:10:42.538106 1430964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:10:42.545200 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:10:42.545228 1430964 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:10:42.570649 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:10:42.570677 1430964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:10:42.580495 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:10:42.580521 1430964 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:10:42.609253 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:10:42.609285 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:10:42.609512 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 13:10:42.611191 1430964 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:10:42.611286 1430964 retry.go:31] will retry after 216.136192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 13:10:42.634003 1430964 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:10:42.634107 1430964 retry.go:31] will retry after 293.519359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
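	Both failed applies above hit the apiserver before it was reachable on localhost:8443; minikube retries them after short backoffs and later re-runs them with --force (see further down). An illustrative shell equivalent of that retry pattern, with paths from this log and example delays:
	    # Sketch of a bounded retry; delays are examples only.
	    for delay in 0.2 0.3 0.5; do
	      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml && break
	      sleep "$delay"
	    done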
	I0929 13:10:42.643987 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:10:42.644016 1430964 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:10:42.674561 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:10:42.674595 1430964 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:10:42.702843 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:10:42.702873 1430964 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:10:42.728750 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:10:42.728781 1430964 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:10:42.753082 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:10:42.753106 1430964 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:10:42.772698 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:10:42.827939 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.928554 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:44.343592 1430964 node_ready.go:49] node "no-preload-554589" is "Ready"
	I0929 13:10:44.343632 1430964 node_ready.go:38] duration metric: took 1.857723898s for node "no-preload-554589" to be "Ready" ...
	I0929 13:10:44.343652 1430964 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:10:44.343710 1430964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:10:44.905717 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.296158013s)
	I0929 13:10:44.905761 1430964 addons.go:479] Verifying addon metrics-server=true in "no-preload-554589"
	I0929 13:10:44.905844 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.133099697s)
	I0929 13:10:44.907337 1430964 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-554589 addons enable metrics-server
	
	I0929 13:10:44.924947 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.096962675s)
	I0929 13:10:44.925023 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.996439442s)
	I0929 13:10:44.925049 1430964 api_server.go:72] duration metric: took 2.626402587s to wait for apiserver process to appear ...
	I0929 13:10:44.925058 1430964 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:10:44.925078 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:44.931266 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:44.931296 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:44.932611 1430964 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I0929 13:10:44.935452 1430964 addons.go:514] duration metric: took 2.636765019s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I0929 13:10:45.426011 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:45.431277 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:45.431304 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:45.925804 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:45.931188 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:45.931222 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:46.425589 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:46.429986 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:46.430025 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:46.925637 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:46.929914 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 13:10:46.931143 1430964 api_server.go:141] control plane version: v1.34.0
	I0929 13:10:46.931168 1430964 api_server.go:131] duration metric: took 2.006103154s to wait for apiserver health ...
	I0929 13:10:46.931177 1430964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:10:46.934948 1430964 system_pods.go:59] 9 kube-system pods found
	I0929 13:10:46.935007 1430964 system_pods.go:61] "coredns-66bc5c9577-6cxff" [0ec3329b-47fd-402f-b8ec-d482d1f9b3c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:46.935024 1430964 system_pods.go:61] "etcd-no-preload-554589" [6ae6f226-f3f5-4916-86ac-241f71542eec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:10:46.935032 1430964 system_pods.go:61] "kindnet-5z49c" [b688a8a1-9c75-42a1-be5a-48aff9897101] Running
	I0929 13:10:46.935040 1430964 system_pods.go:61] "kube-apiserver-no-preload-554589" [461eeb18-0997-4f04-b2f2-bd4f93ae16bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:10:46.935048 1430964 system_pods.go:61] "kube-controller-manager-no-preload-554589" [0095f296-2792-42a7-a015-f92d570fe2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:10:46.935052 1430964 system_pods.go:61] "kube-proxy-8kkxk" [0e984503-4cab-4fcf-a1cb-1684d2247f43] Running
	I0929 13:10:46.935064 1430964 system_pods.go:61] "kube-scheduler-no-preload-554589" [e47072a4-1f75-434b-aa66-477204025b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:10:46.935071 1430964 system_pods.go:61] "metrics-server-746fcd58dc-45phl" [638c53d3-4825-4387-bb3a-56dd0be70464] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:10:46.935075 1430964 system_pods.go:61] "storage-provisioner" [af1e37d9-c313-4db3-a626-81403cf9ad15] Running
	I0929 13:10:46.935084 1430964 system_pods.go:74] duration metric: took 3.897674ms to wait for pod list to return data ...
	I0929 13:10:46.935098 1430964 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:10:46.937529 1430964 default_sa.go:45] found service account: "default"
	I0929 13:10:46.937550 1430964 default_sa.go:55] duration metric: took 2.442128ms for default service account to be created ...
	I0929 13:10:46.937558 1430964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:10:46.940321 1430964 system_pods.go:86] 9 kube-system pods found
	I0929 13:10:46.940347 1430964 system_pods.go:89] "coredns-66bc5c9577-6cxff" [0ec3329b-47fd-402f-b8ec-d482d1f9b3c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:46.940355 1430964 system_pods.go:89] "etcd-no-preload-554589" [6ae6f226-f3f5-4916-86ac-241f71542eec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:10:46.940361 1430964 system_pods.go:89] "kindnet-5z49c" [b688a8a1-9c75-42a1-be5a-48aff9897101] Running
	I0929 13:10:46.940368 1430964 system_pods.go:89] "kube-apiserver-no-preload-554589" [461eeb18-0997-4f04-b2f2-bd4f93ae16bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:10:46.940375 1430964 system_pods.go:89] "kube-controller-manager-no-preload-554589" [0095f296-2792-42a7-a015-f92d570fe2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:10:46.940388 1430964 system_pods.go:89] "kube-proxy-8kkxk" [0e984503-4cab-4fcf-a1cb-1684d2247f43] Running
	I0929 13:10:46.940399 1430964 system_pods.go:89] "kube-scheduler-no-preload-554589" [e47072a4-1f75-434b-aa66-477204025b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:10:46.940412 1430964 system_pods.go:89] "metrics-server-746fcd58dc-45phl" [638c53d3-4825-4387-bb3a-56dd0be70464] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:10:46.940419 1430964 system_pods.go:89] "storage-provisioner" [af1e37d9-c313-4db3-a626-81403cf9ad15] Running
	I0929 13:10:46.940427 1430964 system_pods.go:126] duration metric: took 2.863046ms to wait for k8s-apps to be running ...
	I0929 13:10:46.940441 1430964 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:10:46.940488 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:10:46.954207 1430964 system_svc.go:56] duration metric: took 13.760371ms WaitForService to wait for kubelet
	I0929 13:10:46.954239 1430964 kubeadm.go:578] duration metric: took 4.655591833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:10:46.954275 1430964 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:10:46.957433 1430964 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:10:46.957457 1430964 node_conditions.go:123] node cpu capacity is 8
	I0929 13:10:46.957468 1430964 node_conditions.go:105] duration metric: took 3.188601ms to run NodePressure ...
	I0929 13:10:46.957482 1430964 start.go:241] waiting for startup goroutines ...
	I0929 13:10:46.957491 1430964 start.go:246] waiting for cluster config update ...
	I0929 13:10:46.957507 1430964 start.go:255] writing updated cluster config ...
	I0929 13:10:46.957779 1430964 ssh_runner.go:195] Run: rm -f paused
	I0929 13:10:46.961696 1430964 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:10:46.965332 1430964 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6cxff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:10:48.970466 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	I0929 13:10:50.103007 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:10:50.103057 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:10:50.103075 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:10:50.103091 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:50.103100 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:10:50.103107 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:10:50.103115 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:10:50.103122 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:10:50.103130 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:10:50.103135 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:10:50.103156 1359411 retry.go:31] will retry after 45.832047842s: missing components: kube-dns
	W0929 13:10:50.971068 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:52.971656 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:55.470753 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:57.970580 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:59.970813 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:01.970891 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:04.471085 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:06.971048 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:09.470881 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:11.471210 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:13.970282 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:15.971635 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:18.470862 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:20.471131 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	I0929 13:11:21.970955 1430964 pod_ready.go:94] pod "coredns-66bc5c9577-6cxff" is "Ready"
	I0929 13:11:21.971016 1430964 pod_ready.go:86] duration metric: took 35.005660476s for pod "coredns-66bc5c9577-6cxff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.973497 1430964 pod_ready.go:83] waiting for pod "etcd-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.976953 1430964 pod_ready.go:94] pod "etcd-no-preload-554589" is "Ready"
	I0929 13:11:21.977006 1430964 pod_ready.go:86] duration metric: took 3.479297ms for pod "etcd-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.978873 1430964 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.982431 1430964 pod_ready.go:94] pod "kube-apiserver-no-preload-554589" is "Ready"
	I0929 13:11:21.982453 1430964 pod_ready.go:86] duration metric: took 3.560274ms for pod "kube-apiserver-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.984284 1430964 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.168713 1430964 pod_ready.go:94] pod "kube-controller-manager-no-preload-554589" is "Ready"
	I0929 13:11:22.168740 1430964 pod_ready.go:86] duration metric: took 184.436823ms for pod "kube-controller-manager-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.369803 1430964 pod_ready.go:83] waiting for pod "kube-proxy-8kkxk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.769375 1430964 pod_ready.go:94] pod "kube-proxy-8kkxk" is "Ready"
	I0929 13:11:22.769412 1430964 pod_ready.go:86] duration metric: took 399.578121ms for pod "kube-proxy-8kkxk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.969424 1430964 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:23.369302 1430964 pod_ready.go:94] pod "kube-scheduler-no-preload-554589" is "Ready"
	I0929 13:11:23.369329 1430964 pod_ready.go:86] duration metric: took 399.880622ms for pod "kube-scheduler-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:23.369339 1430964 pod_ready.go:40] duration metric: took 36.407610233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:11:23.415562 1430964 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:11:23.417576 1430964 out.go:179] * Done! kubectl is now configured to use "no-preload-554589" cluster and "default" namespace by default
	I0929 13:11:35.940622 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:11:35.940666 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:11:35.940679 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:11:35.940691 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:11:35.940697 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:11:35.940703 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:11:35.940709 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:11:35.940717 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:11:35.940726 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:11:35.940732 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:11:35.940756 1359411 retry.go:31] will retry after 45.593833894s: missing components: kube-dns
	I0929 13:12:21.540022 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:12:21.540068 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:12:21.540078 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:12:21.540088 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:12:21.540096 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:12:21.540102 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:12:21.540108 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:12:21.540112 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:12:21.540117 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:12:21.540120 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:12:21.540139 1359411 retry.go:31] will retry after 1m5.22199495s: missing components: kube-dns
	I0929 13:13:26.769357 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:13:26.769402 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:13:26.769415 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:13:26.769424 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:13:26.769428 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:13:26.769432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:13:26.769438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:13:26.769442 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:13:26.769446 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:13:26.769449 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:13:26.769467 1359411 retry.go:31] will retry after 1m13.959390534s: missing components: kube-dns
	I0929 13:14:40.733869 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:40.733915 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:14:40.733926 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:14:40.733934 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:40.733937 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:14:40.733942 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:14:40.733946 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:14:40.733951 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:14:40.733954 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:14:40.733958 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:14:40.734002 1359411 retry.go:31] will retry after 1m13.688928173s: missing components: kube-dns
	I0929 13:15:54.426567 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:15:54.426609 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:15:54.426619 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:15:54.426627 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:15:54.426631 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:15:54.426635 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:15:54.426639 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:15:54.426644 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:15:54.426647 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:15:54.426650 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:15:54.426671 1359411 retry.go:31] will retry after 53.851303252s: missing components: kube-dns
	I0929 13:16:48.282415 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:16:48.282459 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:16:48.282471 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:16:48.282478 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:16:48.282481 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:16:48.282486 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:16:48.282489 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:16:48.282493 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:16:48.282496 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:16:48.282499 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:16:48.282516 1359411 retry.go:31] will retry after 1m1.774490631s: missing components: kube-dns
	I0929 13:17:50.062266 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:17:50.062389 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:17:50.062405 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:17:50.062415 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:17:50.062422 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:17:50.062432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:17:50.062438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:17:50.062447 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:17:50.062453 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:17:50.062460 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:17:50.062481 1359411 retry.go:31] will retry after 1m8.677032828s: missing components: kube-dns
	I0929 13:18:58.743760 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:18:58.743806 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:18:58.743817 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:18:58.743825 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:18:58.743832 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:18:58.743840 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:18:58.743846 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:18:58.743853 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:18:58.743858 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:18:58.743862 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:18:58.743885 1359411 retry.go:31] will retry after 1m8.264714311s: missing components: kube-dns
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0c05d2c89bc93       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   3882b3dba6189       dashboard-metrics-scraper-6ffb444bf9-wfz7q
	89d5dd60e7ea9       6e38f40d628db       8 minutes ago       Running             storage-provisioner         2                   040df12ea072b       storage-provisioner
	1bd11e57da3f5       409467f978b4a       9 minutes ago       Running             kindnet-cni                 1                   4304b56f80f6a       kindnet-bmw79
	23057c561f919       52546a367cc9e       9 minutes ago       Running             coredns                     1                   bcc830925fbde       coredns-66bc5c9577-7ks4q
	5edfb59ce5bfd       56cc512116c8f       9 minutes ago       Running             busybox                     1                   5bcfcf5204b14       busybox
	41d326ca33119       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         1                   040df12ea072b       storage-provisioner
	ad925575e8d3c       df0860106674d       9 minutes ago       Running             kube-proxy                  1                   188acd4aa524a       kube-proxy-lg9p7
	57d2a9624e102       90550c43ad2bc       9 minutes ago       Running             kube-apiserver              1                   8234959642546       kube-apiserver-embed-certs-644246
	2cf5ae260ab08       a0af72f2ec6d6       9 minutes ago       Running             kube-controller-manager     1                   b42b761142114       kube-controller-manager-embed-certs-644246
	2c5e8c1c3ba13       5f1f5298c888d       9 minutes ago       Running             etcd                        1                   8adfdcbd113ce       etcd-embed-certs-644246
	e7e3c0e682ae2       46169d968e920       9 minutes ago       Running             kube-scheduler              1                   328268537f830       kube-scheduler-embed-certs-644246
	c8e5525bd40e7       56cc512116c8f       10 minutes ago      Exited              busybox                     0                   bcabf760c3cba       busybox
	69d4c35cd6401       52546a367cc9e       10 minutes ago      Exited              coredns                     0                   cc957c77e1525       coredns-66bc5c9577-7ks4q
	1fc235f0560df       409467f978b4a       10 minutes ago      Exited              kindnet-cni                 0                   3fb409eba6f0d       kindnet-bmw79
	e6b423e6a31e5       df0860106674d       10 minutes ago      Exited              kube-proxy                  0                   66e86b84e404c       kube-proxy-lg9p7
	fae72d184a31a       5f1f5298c888d       10 minutes ago      Exited              etcd                        0                   ae2c618ac3054       etcd-embed-certs-644246
	57e6ad38567b4       a0af72f2ec6d6       10 minutes ago      Exited              kube-controller-manager     0                   ebd534628011d       kube-controller-manager-embed-certs-644246
	ee3dd7dd9a85b       46169d968e920       10 minutes ago      Exited              kube-scheduler              0                   5ef1b712437e4       kube-scheduler-embed-certs-644246
	b3e4f310cb1d2       90550c43ad2bc       10 minutes ago      Exited              kube-apiserver              0                   7289aadb8d395       kube-apiserver-embed-certs-644246
	
	
	==> containerd <==
	Sep 29 13:12:55 embed-certs-644246 containerd[476]: time="2025-09-29T13:12:55.363900236Z" level=info msg="received exit event container_id:\"816501253bf3c94203a02cf8f758d9e66d710f63c8f9da9d7890d2b210540eb7\"  id:\"816501253bf3c94203a02cf8f758d9e66d710f63c8f9da9d7890d2b210540eb7\"  pid:2677  exit_status:1  exited_at:{seconds:1759151575  nanos:363663445}"
	Sep 29 13:12:55 embed-certs-644246 containerd[476]: time="2025-09-29T13:12:55.384653448Z" level=info msg="shim disconnected" id=816501253bf3c94203a02cf8f758d9e66d710f63c8f9da9d7890d2b210540eb7 namespace=k8s.io
	Sep 29 13:12:55 embed-certs-644246 containerd[476]: time="2025-09-29T13:12:55.384696707Z" level=warning msg="cleaning up after shim disconnected" id=816501253bf3c94203a02cf8f758d9e66d710f63c8f9da9d7890d2b210540eb7 namespace=k8s.io
	Sep 29 13:12:55 embed-certs-644246 containerd[476]: time="2025-09-29T13:12:55.384706854Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:12:55 embed-certs-644246 containerd[476]: time="2025-09-29T13:12:55.861343507Z" level=info msg="RemoveContainer for \"a8d8f9dd74c41896475e9870a0eba31a6c5ec61fab7f7a1f369195208afbe087\""
	Sep 29 13:12:55 embed-certs-644246 containerd[476]: time="2025-09-29T13:12:55.865423469Z" level=info msg="RemoveContainer for \"a8d8f9dd74c41896475e9870a0eba31a6c5ec61fab7f7a1f369195208afbe087\" returns successfully"
	Sep 29 13:15:21 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:21.276788432Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:15:21 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:21.337393714Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Sep 29 13:15:21 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:21.338718228Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 13:15:21 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:21.338772010Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:15:30 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:30.276607796Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:15:30 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:30.278257203Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:15:31 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:31.077376361Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:15:32 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:32.947451509Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:15:32 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:32.947505461Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.278273957Z" level=info msg="CreateContainer within sandbox \"3882b3dba6189bc071bb9a3af95578e663e4f7d6dbc7cd79e40db9fca4dc8f26\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.288764783Z" level=info msg="CreateContainer within sandbox \"3882b3dba6189bc071bb9a3af95578e663e4f7d6dbc7cd79e40db9fca4dc8f26\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536\""
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.289489710Z" level=info msg="StartContainer for \"0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536\""
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.351741345Z" level=info msg="StartContainer for \"0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536\" returns successfully"
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.366749334Z" level=info msg="received exit event container_id:\"0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536\"  id:\"0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536\"  pid:2758  exit_status:1  exited_at:{seconds:1759151747  nanos:366346441}"
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.389346666Z" level=info msg="shim disconnected" id=0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536 namespace=k8s.io
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.389402727Z" level=warning msg="cleaning up after shim disconnected" id=0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536 namespace=k8s.io
	Sep 29 13:15:47 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:47.389415041Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:15:48 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:48.290059072Z" level=info msg="RemoveContainer for \"816501253bf3c94203a02cf8f758d9e66d710f63c8f9da9d7890d2b210540eb7\""
	Sep 29 13:15:48 embed-certs-644246 containerd[476]: time="2025-09-29T13:15:48.294268194Z" level=info msg="RemoveContainer for \"816501253bf3c94203a02cf8f758d9e66d710f63c8f9da9d7890d2b210540eb7\" returns successfully"
	
	
	==> coredns [23057c561f919b94945c5b05eb31b16450ca88a3241da82394cad4ee1da8c20a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35327 - 38432 "HINFO IN 4399475585571759701.8623236205832913163. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022894898s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [69d4c35cd64012132a0644040388e5fa4c203fc451def79d6cc0efd90d7ccd30] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36543 - 35168 "HINFO IN 957827576055723592.8006585472796840795. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026866297s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               embed-certs-644246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-644246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=embed-certs-644246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_08_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-644246
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:19:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:16:16 +0000   Mon, 29 Sep 2025 13:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:16:16 +0000   Mon, 29 Sep 2025 13:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:16:16 +0000   Mon, 29 Sep 2025 13:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:16:16 +0000   Mon, 29 Sep 2025 13:08:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-644246
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d0d4c109c85432e8db15932f17bb840
	  System UUID:                65c9402f-6de5-41ae-a2fe-c7db7f885c6a
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-7ks4q                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-embed-certs-644246                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-bmw79                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-embed-certs-644246             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-embed-certs-644246    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-lg9p7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-embed-certs-644246             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-mt8dc               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         9m59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wfz7q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rfr6g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m36s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node embed-certs-644246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node embed-certs-644246 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node embed-certs-644246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node embed-certs-644246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node embed-certs-644246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node embed-certs-644246 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node embed-certs-644246 event: Registered Node embed-certs-644246 in Controller
	  Normal  Starting                 9m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m40s (x8 over 9m40s)  kubelet          Node embed-certs-644246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s (x8 over 9m40s)  kubelet          Node embed-certs-644246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s (x7 over 9m40s)  kubelet          Node embed-certs-644246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m34s                  node-controller  Node embed-certs-644246 event: Registered Node embed-certs-644246 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [2c5e8c1c3ba1329d5deb104ffe4a123648580a6355cce5276dd514d7d91b4f82] <==
	{"level":"warn","ts":"2025-09-29T13:09:47.418419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.427079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.437499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.447485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.455549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.463543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.472132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.480809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.489067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.497725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.505926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.514176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.524925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.530547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.538816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.546913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.570927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.578682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.586324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.597361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.605168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.612722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.665180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:49.820935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.831879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:09:49.821043Z","caller":"traceutil/trace.go:172","msg":"trace[272780333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:557; }","duration":"178.963086ms","start":"2025-09-29T13:09:49.642069Z","end":"2025-09-29T13:09:49.821032Z","steps":["trace[272780333] 'range keys from in-memory index tree'  (duration: 178.751475ms)"],"step_count":1}
	
	
	==> etcd [fae72d184a31a651fed9071d2b9c7e5800c30e9d2e02171a24443d036ad0e6c3] <==
	{"level":"warn","ts":"2025-09-29T13:08:50.511694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.521081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.529337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.536384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.542787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.549574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.557398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.564507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.571583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.578767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.585307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.592220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.607055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.614655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.622740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.629787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.637851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.644435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.652831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.660173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.666618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.677017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.683895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.690728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.737606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:19:25 up  6:01,  0 users,  load average: 0.27, 0.77, 1.50
	Linux embed-certs-644246 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bd11e57da3f5af163b1ef12b74b2f556d41b6b0dd934ef71a46920d653881db] <==
	I0929 13:17:19.785077       1 main.go:301] handling current node
	I0929 13:17:29.777036       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:17:29.777076       1 main.go:301] handling current node
	I0929 13:17:39.778290       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:17:39.778322       1 main.go:301] handling current node
	I0929 13:17:49.776070       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:17:49.776100       1 main.go:301] handling current node
	I0929 13:17:59.776122       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:17:59.776178       1 main.go:301] handling current node
	I0929 13:18:09.777168       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:09.777218       1 main.go:301] handling current node
	I0929 13:18:19.776524       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:19.776554       1 main.go:301] handling current node
	I0929 13:18:29.777361       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:29.777412       1 main.go:301] handling current node
	I0929 13:18:39.777056       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:39.777088       1 main.go:301] handling current node
	I0929 13:18:49.785119       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:49.785159       1 main.go:301] handling current node
	I0929 13:18:59.776497       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:18:59.776563       1 main.go:301] handling current node
	I0929 13:19:09.776093       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:09.776144       1 main.go:301] handling current node
	I0929 13:19:19.785093       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:19:19.785121       1 main.go:301] handling current node
	
	
	==> kindnet [1fc235f0560df18bc1b28b2bc719561389fcc2648b4672c2df106c3f1e4ceea8] <==
	I0929 13:09:00.068714       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:09:00.068958       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0929 13:09:00.069180       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:09:00.069198       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:09:00.069221       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:09:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:09:00.290732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:09:00.290757       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:09:00.290769       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:09:00.291785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:09:00.765003       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:09:00.765141       1 metrics.go:72] Registering metrics
	I0929 13:09:00.765564       1 controller.go:711] "Syncing nftables rules"
	I0929 13:09:10.295074       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:09:10.295153       1 main.go:301] handling current node
	I0929 13:09:20.291036       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:09:20.291067       1 main.go:301] handling current node
	
	
	==> kube-apiserver [57d2a9624e1027c381e850139a59371acba3ffe2995876f3c7ee92312c5ba2ec] <==
	I0929 13:14:59.476289       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:15:30.768635       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:15:49.263604       1 handler_proxy.go:99] no RequestInfo found in the context
	W0929 13:15:49.263623       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:15:49.263645       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:15:49.263660       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 13:15:49.263672       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:15:49.264748       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:16:11.883349       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:16:42.732098       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:17:26.657095       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:17:49.264143       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:17:49.264190       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:17:49.264205       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:17:49.265340       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:17:49.265442       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:17:49.265459       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:18:12.513890       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:18:41.659206       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [b3e4f310cb1d2ceba76acb2c895b2c16bae6c0352d218f9dcdca0ec6dddeb40a] <==
	I0929 13:08:53.838497       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:08:53.845771       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:08:58.884559       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:08:58.889636       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:08:58.934886       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0929 13:08:59.083365       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0929 13:09:25.791022       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:34838: use of closed network connection
	I0929 13:09:26.452688       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:09:26.456493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:09:26.456563       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:09:26.456638       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:09:26.540051       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.103.15.144"}
	W0929 13:09:26.545498       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:09:26.545572       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0929 13:09:26.550721       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:09:26.550776       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [2cf5ae260ab086a7858422032a71c2b1a2118a4cb6b2821658fcfe01935eb793] <==
	I0929 13:13:21.662628       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:13:51.635467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:13:51.668798       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:14:21.638843       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:14:21.675368       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:14:51.643395       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:14:51.682470       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:15:21.647119       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:15:21.688588       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:15:51.651701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:15:51.695175       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:16:21.655255       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:16:21.701237       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:16:51.659682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:16:51.708466       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:17:21.663401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:17:21.714999       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:17:51.667555       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:17:51.722414       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:21.670976       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:21.729183       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:51.675413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:51.736082       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:21.679245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:21.741791       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [57e6ad38567b4e1b04e678e744e5821d6d7f8bec8cb60f6881032eb0f2c10fc7] <==
	I0929 13:08:58.081763       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 13:08:58.085232       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 13:08:58.088776       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 13:08:58.127694       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 13:08:58.129161       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:08:58.129197       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 13:08:58.129230       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 13:08:58.129233       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:08:58.129283       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:08:58.129291       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:08:58.129506       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 13:08:58.129556       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 13:08:58.129586       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 13:08:58.130168       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 13:08:58.130296       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 13:08:58.130696       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 13:08:58.130922       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 13:08:58.131281       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 13:08:58.131352       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 13:08:58.132112       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 13:08:58.136059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:08:58.137115       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 13:08:58.140291       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:08:58.143578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 13:08:58.160163       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ad925575e8d3c94f93c2cd600707ce56562d811d8418d2a7d9d44de283de1431] <==
	I0929 13:09:49.066752       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:09:49.140973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:09:49.241255       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:09:49.241316       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 13:09:49.241409       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:09:49.268207       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:09:49.268280       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:09:49.274996       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:09:49.275468       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:09:49.275502       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:49.277449       1 config.go:200] "Starting service config controller"
	I0929 13:09:49.277466       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:09:49.277512       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:09:49.277527       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:09:49.277540       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:09:49.277546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:09:49.277792       1 config.go:309] "Starting node config controller"
	I0929 13:09:49.277821       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:09:49.377717       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:09:49.377734       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:09:49.377755       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:09:49.378568       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [e6b423e6a31e527deb5d8b921af6f92ea464c39c704ed95442081d8698e6a6cd] <==
	I0929 13:08:59.730309       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:08:59.792652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:08:59.893637       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:08:59.893676       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 13:08:59.893810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:08:59.918932       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:08:59.919017       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:08:59.925750       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:08:59.926444       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:08:59.926474       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:08:59.927955       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:08:59.928002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:08:59.928031       1 config.go:200] "Starting service config controller"
	I0929 13:08:59.928036       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:08:59.928057       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:08:59.928058       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:08:59.928387       1 config.go:309] "Starting node config controller"
	I0929 13:08:59.928400       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:08:59.928407       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:09:00.028519       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:09:00.028545       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:09:00.028519       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e7e3c0e682ae22da885e0dbd49d91495430c1f695431e13b4e68584eafd38663] <==
	I0929 13:09:47.253664       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:09:48.248354       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:09:48.248509       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:09:48.248572       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:09:48.248597       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:09:48.288071       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:09:48.289154       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:48.295350       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:09:48.295590       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:09:48.297174       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:09:48.297265       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:09:48.395924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ee3dd7dd9a85bce92920f319eb88479792baeceb3c535dfa8680b81695bd5ba9] <==
	E0929 13:08:51.145957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:08:51.145903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:08:51.145998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 13:08:51.146063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:08:51.146048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:08:51.146094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:08:51.146190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:08:51.146475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:08:51.146484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:08:51.146517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:08:51.146549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:08:51.146581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:08:51.146613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 13:08:51.146610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:08:51.146626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 13:08:52.035305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:08:52.053704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:08:52.099932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:08:52.126979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:08:52.194786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 13:08:52.198855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:08:52.276367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:08:52.286380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:08:52.300453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0929 13:08:54.343609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:18:10 embed-certs-644246 kubelet[609]: E0929 13:18:10.276515     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:18:15 embed-certs-644246 kubelet[609]: I0929 13:18:15.276531     609 scope.go:117] "RemoveContainer" containerID="0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536"
	Sep 29 13:18:15 embed-certs-644246 kubelet[609]: E0929 13:18:15.276729     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:18:15 embed-certs-644246 kubelet[609]: E0929 13:18:15.277429     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:18:21 embed-certs-644246 kubelet[609]: E0929 13:18:21.276828     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:18:26 embed-certs-644246 kubelet[609]: E0929 13:18:26.276914     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:18:30 embed-certs-644246 kubelet[609]: I0929 13:18:30.275904     609 scope.go:117] "RemoveContainer" containerID="0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536"
	Sep 29 13:18:30 embed-certs-644246 kubelet[609]: E0929 13:18:30.276088     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:18:33 embed-certs-644246 kubelet[609]: E0929 13:18:33.277180     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:18:41 embed-certs-644246 kubelet[609]: E0929 13:18:41.276845     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:18:45 embed-certs-644246 kubelet[609]: I0929 13:18:45.279268     609 scope.go:117] "RemoveContainer" containerID="0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536"
	Sep 29 13:18:45 embed-certs-644246 kubelet[609]: E0929 13:18:45.279468     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:18:47 embed-certs-644246 kubelet[609]: E0929 13:18:47.277393     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:18:56 embed-certs-644246 kubelet[609]: I0929 13:18:56.275755     609 scope.go:117] "RemoveContainer" containerID="0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536"
	Sep 29 13:18:56 embed-certs-644246 kubelet[609]: E0929 13:18:56.275915     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:18:56 embed-certs-644246 kubelet[609]: E0929 13:18:56.276550     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:18:58 embed-certs-644246 kubelet[609]: E0929 13:18:58.276340     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:19:07 embed-certs-644246 kubelet[609]: I0929 13:19:07.276616     609 scope.go:117] "RemoveContainer" containerID="0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536"
	Sep 29 13:19:07 embed-certs-644246 kubelet[609]: E0929 13:19:07.276904     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:19:09 embed-certs-644246 kubelet[609]: E0929 13:19:09.277196     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:19:11 embed-certs-644246 kubelet[609]: E0929 13:19:11.277317     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:19:19 embed-certs-644246 kubelet[609]: I0929 13:19:19.275408     609 scope.go:117] "RemoveContainer" containerID="0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536"
	Sep 29 13:19:19 embed-certs-644246 kubelet[609]: E0929 13:19:19.275581     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:19:20 embed-certs-644246 kubelet[609]: E0929 13:19:20.276234     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:19:23 embed-certs-644246 kubelet[609]: E0929 13:19:23.277302     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	
	
	==> storage-provisioner [41d326ca331192a3ff6005cb92d6ee67bbc962d23107ba44a96e3d44fed63d52] <==
	I0929 13:09:49.052178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:10:19.055836       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [89d5dd60e7ea9b3d05940cb04376573f4d2e879ecf1a726ab40a5fdc0f6beb26] <==
	W0929 13:19:00.578361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:02.581408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:02.586458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:04.590036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:04.594336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:06.597851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:06.601740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:08.604985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:08.608985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:10.612418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:10.617054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:12.619519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:12.623473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:14.626453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:14.630150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:16.632651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:16.638024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:18.641115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:18.644810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:20.648136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:20.652508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:22.655600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:22.659670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:24.663136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:19:24.667640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-644246 -n embed-certs-644246
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-644246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-644246 describe pod metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-644246 describe pod metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g: exit status 1 (58.618205ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-mt8dc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rfr6g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-644246 describe pod metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-95jmk" [010dbb38-5dfe-41e9-a655-0c6d4115135a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:11:39.684090 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:43.751922 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:43.758268 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:43.769644 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:43.790955 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:43.832356 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:43.914132 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:44.075631 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:44.397317 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:45.038871 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:46.320387 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:48.882003 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:50.830696 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:54.004103 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:04.246194 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.286801 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.293216 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.304617 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.326036 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.367442 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.448895 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.610416 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:08.932499 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:09.574524 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:10.856419 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:13.417782 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:18.539115 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:20.645675 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:24.728025 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:28.780428 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:49.262581 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.086188 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.092544 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.103905 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.125231 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.166703 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.248126 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.409661 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:12:59.731265 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:00.372747 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:01.654109 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:04.216223 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:05.689413 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:09.338099 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:12.751911 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:19.580252 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:30.224118 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:40.062442 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:40.995730 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:41.002115 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:41.013435 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:41.034802 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:41.076122 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:41.157518 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:41.319011 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:41.640571 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:42.282205 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:42.567930 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:43.564011 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:46.126318 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:13:51.248187 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:14:01.490192 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:14:21.024089 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:14:21.972313 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:14:27.611267 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:14:52.146685 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:02.934272 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:03.754830 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:20.684020 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:28.892347 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:39.707698 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:42.945652 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:56.594214 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:15:58.708584 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:16:24.856491 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:16:26.409602 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:16:43.751479 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:17:08.287340 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:17:11.453200 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:17:35.988453 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:17:59.086311 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:18:26.787789 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:18:40.995323 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:18:42.779033 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554589 -n no-preload-554589
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:20:24.069839181 +0000 UTC m=+3788.653862370
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-554589 describe po kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-554589 describe po kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-95jmk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-554589/192.168.94.2
Start Time:       Mon, 29 Sep 2025 13:10:49 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-st8jd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-st8jd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m34s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk to no-preload-554589
  Normal   Pulling    6m18s (x5 over 9m35s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     6m15s (x5 over 9m30s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m15s (x5 over 9m30s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m20s (x20 over 9m30s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m6s (x21 over 9m30s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-554589 logs kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-554589 logs kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard: exit status 1 (73.781189ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-95jmk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-554589 logs kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-554589
helpers_test.go:243: (dbg) docker inspect no-preload-554589:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94",
	        "Created": "2025-09-29T13:09:10.342280688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1431160,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:10:36.121342788Z",
	            "FinishedAt": "2025-09-29T13:10:35.305865245Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/hosts",
	        "LogPath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94-json.log",
	        "Name": "/no-preload-554589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-554589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-554589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94",
	                "LowerDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-554589",
	                "Source": "/var/lib/docker/volumes/no-preload-554589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-554589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-554589",
	                "name.minikube.sigs.k8s.io": "no-preload-554589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "607f41e6a298d3adb3d27debd3320db6d26c567d901e2f3451e88d6743e1fd6b",
	            "SandboxKey": "/var/run/docker/netns/607f41e6a298",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33611"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33612"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33615"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33613"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33614"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-554589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:00:17:05:95:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "263b2a799048a278e68c13f92637797019eb9f5749eb8ada73792b87bdd5d9d4",
	                    "EndpointID": "fc94ef3df882bab10e575a32b4c97a8593affc9019145e0498fb54cd7405a90b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-554589",
	                        "d8190e936dbd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554589 -n no-preload-554589
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-554589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-554589 logs -n 25: (1.598252173s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p bridge-321209 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo containerd config dump                                                                                                                                                                                                        │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │                     │
	│ ssh     │ -p bridge-321209 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ ssh     │ -p bridge-321209 sudo crio config                                                                                                                                                                                                                   │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ delete  │ -p bridge-321209                                                                                                                                                                                                                                    │ bridge-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ delete  │ -p disable-driver-mounts-849793                                                                                                                                                                                                                     │ disable-driver-mounts-849793 │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-495121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p old-k8s-version-495121 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-495121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p old-k8s-version-495121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p embed-certs-644246 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ stop    │ -p embed-certs-644246 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ addons  │ enable dashboard -p embed-certs-644246 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:09 UTC │
	│ start   │ -p embed-certs-644246 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:09 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-554589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ stop    │ -p no-preload-554589 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ addons  │ enable dashboard -p no-preload-554589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:10 UTC │
	│ start   │ -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                                       │ no-preload-554589            │ jenkins │ v1.37.0 │ 29 Sep 25 13:10 UTC │ 29 Sep 25 13:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:10:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:10:35.887390 1430964 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:10:35.887528 1430964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:10:35.887538 1430964 out.go:374] Setting ErrFile to fd 2...
	I0929 13:10:35.887543 1430964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:10:35.887766 1430964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:10:35.888286 1430964 out.go:368] Setting JSON to false
	I0929 13:10:35.889692 1430964 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21173,"bootTime":1759130263,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:10:35.889804 1430964 start.go:140] virtualization: kvm guest
	I0929 13:10:35.892010 1430964 out.go:179] * [no-preload-554589] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:10:35.893293 1430964 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:10:35.893300 1430964 notify.go:220] Checking for updates...
	I0929 13:10:35.895737 1430964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:10:35.896838 1430964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:35.897902 1430964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:10:35.898915 1430964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:10:35.899947 1430964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:10:35.901594 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:35.902157 1430964 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:10:35.926890 1430964 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:10:35.926997 1430964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:10:35.983850 1430964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:10:35.973663238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:10:35.983999 1430964 docker.go:318] overlay module found
	I0929 13:10:35.986231 1430964 out.go:179] * Using the docker driver based on existing profile
	I0929 13:10:35.987170 1430964 start.go:304] selected driver: docker
	I0929 13:10:35.987184 1430964 start.go:924] validating driver "docker" against &{Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:35.987271 1430964 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:10:35.987858 1430964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:10:36.048316 1430964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:10:36.037327075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:10:36.048601 1430964 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:10:36.048632 1430964 cni.go:84] Creating CNI manager for ""
	I0929 13:10:36.048678 1430964 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:10:36.048716 1430964 start.go:348] cluster config:
	{Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:36.050290 1430964 out.go:179] * Starting "no-preload-554589" primary control-plane node in "no-preload-554589" cluster
	I0929 13:10:36.051338 1430964 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:10:36.052310 1430964 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:10:36.053168 1430964 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:10:36.053271 1430964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:10:36.053310 1430964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/config.json ...
	I0929 13:10:36.053485 1430964 cache.go:107] acquiring lock: {Name:mk0a24f1bf5eff836d398ee592530f35f71c0ee4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053482 1430964 cache.go:107] acquiring lock: {Name:mk71aec952ee722ffcd940a39d5e958f64a61352 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053585 1430964 cache.go:107] acquiring lock: {Name:mk34c1dbc7ce4b55aef58920d74b57fccb4f6138 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053579 1430964 cache.go:107] acquiring lock: {Name:mke82396d3d70feba1e14470b5460d60995ab461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053623 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 13:10:36.053595 1430964 cache.go:107] acquiring lock: {Name:mkbf689face8cd4cbe1088f8d16d264b311f5a05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053636 1430964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 175.418µs
	I0929 13:10:36.053653 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 13:10:36.053655 1430964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 13:10:36.053662 1430964 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 80.179µs
	I0929 13:10:36.053671 1430964 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 13:10:36.053681 1430964 cache.go:107] acquiring lock: {Name:mk3476c105048b10b0947812a968956108eab0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053739 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 13:10:36.053734 1430964 cache.go:107] acquiring lock: {Name:mka7f06997e7f1d40489000070294d8bfac768af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.053755 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 13:10:36.053752 1430964 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 234.346µs
	I0929 13:10:36.053771 1430964 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 13:10:36.053770 1430964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 233.678µs
	I0929 13:10:36.053804 1430964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 13:10:36.053720 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 13:10:36.053827 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 13:10:36.053833 1430964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 365.093µs
	I0929 13:10:36.053851 1430964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 13:10:36.053850 1430964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 238.143µs
	I0929 13:10:36.053859 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 13:10:36.053879 1430964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 191.128µs
	I0929 13:10:36.053891 1430964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 13:10:36.053864 1430964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 13:10:36.053591 1430964 cache.go:107] acquiring lock: {Name:mk385a135f933810a76b1272dffaf4891eef10f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.054019 1430964 cache.go:115] /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 13:10:36.054027 1430964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 443.615µs
	I0929 13:10:36.054035 1430964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 13:10:36.054043 1430964 cache.go:87] Successfully saved all images to host disk.
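For context: the cache hits above resolve each required image to a tar file under .minikube/cache/images/amd64, with the ':' before the tag replaced by '_' (for example registry.k8s.io/kube-apiserver:v1.34.0 maps to .../registry.k8s.io/kube-apiserver_v1.34.0). The small Go sketch below reproduces that path mapping; cachePath and the sample home directory are illustrative, not minikube's actual cache code.

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath maps an image reference to the on-disk tar location seen in the
	// cache hits above (":" before the tag becomes "_"). Illustrative only.
	func cachePath(minikubeHome, arch, image string) string {
		return filepath.Join(minikubeHome, "cache", "images", arch,
			strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/home/jenkins/.minikube", "amd64",
			"registry.k8s.io/kube-apiserver:v1.34.0"))
	}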
	I0929 13:10:36.075042 1430964 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:10:36.075061 1430964 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:10:36.075077 1430964 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:10:36.075108 1430964 start.go:360] acquireMachinesLock for no-preload-554589: {Name:mk5ff8f08413e283845bfb46ae253fb42cbb2a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:10:36.075172 1430964 start.go:364] duration metric: took 44.583µs to acquireMachinesLock for "no-preload-554589"
	I0929 13:10:36.075206 1430964 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:10:36.075218 1430964 fix.go:54] fixHost starting: 
	I0929 13:10:36.075468 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:36.094782 1430964 fix.go:112] recreateIfNeeded on no-preload-554589: state=Stopped err=<nil>
	W0929 13:10:36.094818 1430964 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:10:36.096594 1430964 out.go:252] * Restarting existing docker container for "no-preload-554589" ...
	I0929 13:10:36.096656 1430964 cli_runner.go:164] Run: docker start no-preload-554589
	I0929 13:10:36.348329 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:36.367780 1430964 kic.go:430] container "no-preload-554589" state is running.
	I0929 13:10:36.368218 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:36.387825 1430964 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/config.json ...
	I0929 13:10:36.388091 1430964 machine.go:93] provisionDockerMachine start ...
	I0929 13:10:36.388191 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:36.407360 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:36.407692 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:36.407711 1430964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:10:36.408408 1430964 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40280->127.0.0.1:33611: read: connection reset by peer
	I0929 13:10:39.547089 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-554589
	
	I0929 13:10:39.547121 1430964 ubuntu.go:182] provisioning hostname "no-preload-554589"
	I0929 13:10:39.547190 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.564551 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:39.564843 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:39.564862 1430964 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-554589 && echo "no-preload-554589" | sudo tee /etc/hostname
	I0929 13:10:39.715451 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-554589
	
	I0929 13:10:39.715532 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.733400 1430964 main.go:141] libmachine: Using SSH client type: native
	I0929 13:10:39.733671 1430964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33611 <nil> <nil>}
	I0929 13:10:39.733690 1430964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-554589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-554589/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-554589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:10:39.872701 1430964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:10:39.872728 1430964 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:10:39.872749 1430964 ubuntu.go:190] setting up certificates
	I0929 13:10:39.872759 1430964 provision.go:84] configureAuth start
	I0929 13:10:39.872813 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:39.891390 1430964 provision.go:143] copyHostCerts
	I0929 13:10:39.891464 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:10:39.891484 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:10:39.891561 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:10:39.891693 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:10:39.891709 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:10:39.891752 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:10:39.891910 1430964 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:10:39.891923 1430964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:10:39.891972 1430964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:10:39.892068 1430964 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.no-preload-554589 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-554589]
	I0929 13:10:39.939438 1430964 provision.go:177] copyRemoteCerts
	I0929 13:10:39.939504 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:10:39.939548 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:39.956799 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.055067 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:10:40.080134 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 13:10:40.104611 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:10:40.129350 1430964 provision.go:87] duration metric: took 256.573931ms to configureAuth
	I0929 13:10:40.129378 1430964 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:10:40.129599 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:40.129612 1430964 machine.go:96] duration metric: took 3.741506785s to provisionDockerMachine
	I0929 13:10:40.129622 1430964 start.go:293] postStartSetup for "no-preload-554589" (driver="docker")
	I0929 13:10:40.129637 1430964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:10:40.129690 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:10:40.129756 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.147536 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.246335 1430964 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:10:40.249785 1430964 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:10:40.249812 1430964 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:10:40.249819 1430964 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:10:40.249826 1430964 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:10:40.249835 1430964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:10:40.249880 1430964 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:10:40.249948 1430964 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:10:40.250070 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:10:40.259126 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:10:40.284860 1430964 start.go:296] duration metric: took 155.217314ms for postStartSetup
	I0929 13:10:40.284948 1430964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:10:40.285044 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.302550 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.396065 1430964 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:10:40.400658 1430964 fix.go:56] duration metric: took 4.325395629s for fixHost
	I0929 13:10:40.400685 1430964 start.go:83] releasing machines lock for "no-preload-554589", held for 4.325500319s
	I0929 13:10:40.400745 1430964 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-554589
	I0929 13:10:40.419253 1430964 ssh_runner.go:195] Run: cat /version.json
	I0929 13:10:40.419302 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.419316 1430964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:10:40.419372 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:40.437334 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.437565 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:40.530040 1430964 ssh_runner.go:195] Run: systemctl --version
	I0929 13:10:40.618702 1430964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:10:40.623606 1430964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:10:40.643627 1430964 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:10:40.643704 1430964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:10:40.655028 1430964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
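For context: the find/sed pipeline above patches any loopback CNI config so that it carries a "name" field and a cniVersion of 1.0.0, and would rename bridge/podman configs to *.mk_disabled (none were present here, hence "nothing to disable"). A rough Go equivalent of the loopback patch, using encoding/json, is sketched below; ensureLoopbackConf is a hypothetical helper, not minikube code, which performs this edit with sed over SSH.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// ensureLoopbackConf adds a "name" field if missing and pins cniVersion to
	// 1.0.0 on a loopback CNI config, mirroring the sed edit above. Illustrative only.
	func ensureLoopbackConf(raw []byte) ([]byte, error) {
		var conf map[string]interface{}
		if err := json.Unmarshal(raw, &conf); err != nil {
			return nil, err
		}
		if conf["type"] == "loopback" {
			if _, ok := conf["name"]; !ok {
				conf["name"] = "loopback"
			}
			conf["cniVersion"] = "1.0.0"
		}
		return json.MarshalIndent(conf, "", "  ")
	}

	func main() {
		in := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)
		out, err := ensureLoopbackConf(in)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}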
	I0929 13:10:40.655056 1430964 start.go:495] detecting cgroup driver to use...
	I0929 13:10:40.655090 1430964 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:10:40.655143 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:10:40.669887 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:10:40.682685 1430964 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:10:40.682743 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:10:40.697781 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:10:40.710870 1430964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:10:40.781641 1430964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:10:40.850419 1430964 docker.go:234] disabling docker service ...
	I0929 13:10:40.850476 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:10:40.864573 1430964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:10:40.877583 1430964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:10:40.947404 1430964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:10:41.013464 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:10:41.025589 1430964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:10:41.043594 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:10:41.054426 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:10:41.064879 1430964 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:10:41.064945 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:10:41.075614 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:10:41.085902 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:10:41.096231 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:10:41.106375 1430964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:10:41.116101 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:10:41.126585 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:10:41.136683 1430964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:10:41.147471 1430964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:10:41.156376 1430964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:10:41.164882 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:41.232125 1430964 ssh_runner.go:195] Run: sudo systemctl restart containerd
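For context: the sed invocations above rewrite /etc/containerd/config.toml so containerd uses the systemd cgroup driver, the registry.k8s.io/pause:3.10.1 sandbox image, the runc v2 shim, and /etc/cni/net.d as the CNI config directory, before the daemon is reloaded and restarted. The Go sketch below applies the same substitutions to a config string; the helper name patchContainerdConfig and the sample input are illustrative only, not minikube's implementation (which runs the sed commands over SSH as shown).

	package main

	import (
		"fmt"
		"regexp"
	)

	// patchContainerdConfig mirrors the sed edits in the log above. Illustrative only.
	func patchContainerdConfig(cfg string) string {
		rules := []struct{ pattern, repl string }{
			{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
			{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
			{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = true`},
			{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
			{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
			{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		for _, r := range rules {
			cfg = regexp.MustCompile(r.pattern).ReplaceAllString(cfg, r.repl)
		}
		return cfg
	}

	func main() {
		sample := `[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false`
		fmt.Println(patchContainerdConfig(sample))
	}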
	I0929 13:10:41.336741 1430964 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:10:41.336815 1430964 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:10:41.341097 1430964 start.go:563] Will wait 60s for crictl version
	I0929 13:10:41.341150 1430964 ssh_runner.go:195] Run: which crictl
	I0929 13:10:41.344984 1430964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:10:41.381858 1430964 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 13:10:41.381934 1430964 ssh_runner.go:195] Run: containerd --version
	I0929 13:10:41.407752 1430964 ssh_runner.go:195] Run: containerd --version
	I0929 13:10:41.435044 1430964 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:10:41.436030 1430964 cli_runner.go:164] Run: docker network inspect no-preload-554589 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:10:41.453074 1430964 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 13:10:41.457289 1430964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
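For context: the bash one-liner above rewrites /etc/hosts by dropping any existing host.minikube.internal line and appending a fresh "192.168.94.1<TAB>host.minikube.internal" entry. The Go sketch below performs the same edit on a hosts-file string; setHostsEntry is an illustrative helper only, minikube uses the shell pipeline shown.

	package main

	import (
		"fmt"
		"strings"
	)

	// setHostsEntry drops any line ending in "<TAB>name" and appends "ip<TAB>name",
	// matching the effect of the grep -v / echo pipeline above. Illustrative only.
	func setHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n"
		fmt.Print(setHostsEntry(hosts, "192.168.94.1", "host.minikube.internal"))
	}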
	I0929 13:10:41.469647 1430964 kubeadm.go:875] updating cluster {Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:10:41.469759 1430964 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:10:41.469801 1430964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:10:41.505893 1430964 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:10:41.505917 1430964 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:10:41.505925 1430964 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 containerd true true} ...
	I0929 13:10:41.506080 1430964 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-554589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
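For context: the systemd drop-in above clears the default kubelet ExecStart and replaces it with node-specific flags (hostname override, kubeconfig paths, node IP) for the v1.34.0 binaries. A minimal Go text/template sketch that would render such a drop-in from the values shown is given below; the template constant and struct are illustrative only, not how minikube actually builds the file.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit is an illustrative template for the drop-in shown above.
	const kubeletUnit = `[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		data := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.34.0", "no-preload-554589", "192.168.94.2"}
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}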
	I0929 13:10:41.506140 1430964 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:10:41.542471 1430964 cni.go:84] Creating CNI manager for ""
	I0929 13:10:41.542493 1430964 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:10:41.542504 1430964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:10:41.542530 1430964 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-554589 NodeName:no-preload-554589 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:10:41.542668 1430964 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-554589"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
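For context: the generated kubeadm config above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12, two ranges that must not overlap for the cluster networking to work. The short Go check below verifies that invariant for these values; cidrsOverlap is an illustrative helper, not part of minikube.

	package main

	import (
		"fmt"
		"net"
	)

	// cidrsOverlap reports whether two CIDR ranges share any addresses
	// (two CIDRs overlap exactly when one contains the other's base address).
	func cidrsOverlap(a, b string) (bool, error) {
		_, na, err := net.ParseCIDR(a)
		if err != nil {
			return false, err
		}
		_, nb, err := net.ParseCIDR(b)
		if err != nil {
			return false, err
		}
		return na.Contains(nb.IP) || nb.Contains(na.IP), nil
	}

	func main() {
		overlap, err := cidrsOverlap("10.244.0.0/16", "10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		fmt.Println("pod/service CIDR overlap:", overlap) // false for these values
	}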
	I0929 13:10:41.542745 1430964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:10:41.552925 1430964 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:10:41.553026 1430964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:10:41.562817 1430964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0929 13:10:41.581742 1430964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:10:41.600851 1430964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0929 13:10:41.620107 1430964 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:10:41.623949 1430964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:10:41.636268 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:41.709798 1430964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:10:41.732612 1430964 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589 for IP: 192.168.94.2
	I0929 13:10:41.732634 1430964 certs.go:194] generating shared ca certs ...
	I0929 13:10:41.732655 1430964 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:41.732829 1430964 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:10:41.732882 1430964 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:10:41.732897 1430964 certs.go:256] generating profile certs ...
	I0929 13:10:41.733042 1430964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/client.key
	I0929 13:10:41.733119 1430964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.key.98402d2c
	I0929 13:10:41.733170 1430964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.key
	I0929 13:10:41.733316 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:10:41.733355 1430964 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:10:41.733367 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:10:41.733400 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:10:41.733427 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:10:41.733467 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:10:41.733519 1430964 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:10:41.734337 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:10:41.765009 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:10:41.793504 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:10:41.827789 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:10:41.857035 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:10:41.884766 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:10:41.911756 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:10:41.941605 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:10:41.967516 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:10:41.992710 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:10:42.018319 1430964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:10:42.042856 1430964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:10:42.060923 1430964 ssh_runner.go:195] Run: openssl version
	I0929 13:10:42.066444 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:10:42.076065 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.079599 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.079650 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:10:42.086452 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:10:42.095408 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:10:42.105262 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.108926 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.108999 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:10:42.115656 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 13:10:42.124799 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:10:42.134401 1430964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.137842 1430964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.137890 1430964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:10:42.145059 1430964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:10:42.154717 1430964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:10:42.158651 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:10:42.165748 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:10:42.172341 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:10:42.178784 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:10:42.185439 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:10:42.192086 1430964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
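For context: the openssl x509 -checkend 86400 runs above confirm that each existing control-plane certificate remains valid for at least another 24 hours before the cluster is restarted with it. An equivalent check in Go with crypto/x509 is sketched below; validFor is an illustrative helper, not what minikube actually executes (it shells out to openssl as shown).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path stays valid for at
	// least the given duration, mirroring `openssl x509 -checkend 86400`.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM data found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("valid for 24h:", ok)
	}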
	I0929 13:10:42.198506 1430964 kubeadm.go:392] StartCluster: {Name:no-preload-554589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-554589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:10:42.198617 1430964 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:10:42.198653 1430964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:10:42.235388 1430964 cri.go:89] found id: "3aa4e89ae916232c207fa3b1b9f357dad149bbb0d0a5d1cd2b42c27cad6374b2"
	I0929 13:10:42.235408 1430964 cri.go:89] found id: "21b59ec52c2f189cce4c1c71122fb539bab5404609e8d49bc9bc242623c98f2d"
	I0929 13:10:42.235417 1430964 cri.go:89] found id: "fe92b189cf883cbe93d9474127d870f453d75c020b22114de99123f9f623f3a1"
	I0929 13:10:42.235421 1430964 cri.go:89] found id: "86d180d3fafecd80e755e727e2f50ad02bd1ea0707d33e41b1e2c298740f82b2"
	I0929 13:10:42.235426 1430964 cri.go:89] found id: "f157b54ee5632361a5614f30127b6f5dfc89ff0daa05de53a9f5257c9ebec23a"
	I0929 13:10:42.235429 1430964 cri.go:89] found id: "8c5c1254cf9381b1212b778b0bea8cccf2cd1cd3a2b9653e31070bc574cbe9d7"
	I0929 13:10:42.235431 1430964 cri.go:89] found id: "448fabba6fe89ac66791993182ef471d034e865da39b82ac763c5f6f70777c96"
	I0929 13:10:42.235434 1430964 cri.go:89] found id: "3e59ee92e127e9ebe23e71830eaec1c6942debeff812ea825dca6bd1ca6af1b8"
	I0929 13:10:42.235436 1430964 cri.go:89] found id: ""
	I0929 13:10:42.235495 1430964 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0929 13:10:42.250871 1430964 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-29T13:10:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0929 13:10:42.250953 1430964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:10:42.263482 1430964 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:10:42.263515 1430964 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:10:42.263568 1430964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:10:42.276428 1430964 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:10:42.277682 1430964 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-554589" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:42.278693 1430964 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1097891/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-554589" cluster setting kubeconfig missing "no-preload-554589" context setting]
	I0929 13:10:42.280300 1430964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.282772 1430964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:10:42.295661 1430964 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 13:10:42.295700 1430964 kubeadm.go:593] duration metric: took 32.178175ms to restartPrimaryControlPlane
	I0929 13:10:42.295712 1430964 kubeadm.go:394] duration metric: took 97.214108ms to StartCluster
	I0929 13:10:42.295732 1430964 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.295792 1430964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:10:42.298396 1430964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:10:42.298619 1430964 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:10:42.298702 1430964 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:10:42.298790 1430964 addons.go:69] Setting storage-provisioner=true in profile "no-preload-554589"
	I0929 13:10:42.298805 1430964 addons.go:238] Setting addon storage-provisioner=true in "no-preload-554589"
	W0929 13:10:42.298811 1430964 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:10:42.298837 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.298829 1430964 addons.go:69] Setting default-storageclass=true in profile "no-preload-554589"
	I0929 13:10:42.298848 1430964 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:10:42.298857 1430964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-554589"
	I0929 13:10:42.298870 1430964 addons.go:69] Setting dashboard=true in profile "no-preload-554589"
	I0929 13:10:42.298841 1430964 addons.go:69] Setting metrics-server=true in profile "no-preload-554589"
	I0929 13:10:42.298896 1430964 addons.go:238] Setting addon dashboard=true in "no-preload-554589"
	W0929 13:10:42.298906 1430964 addons.go:247] addon dashboard should already be in state true
	I0929 13:10:42.298908 1430964 addons.go:238] Setting addon metrics-server=true in "no-preload-554589"
	W0929 13:10:42.298917 1430964 addons.go:247] addon metrics-server should already be in state true
	I0929 13:10:42.298940 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.298942 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.299211 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299337 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299397 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.299410 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.301050 1430964 out.go:179] * Verifying Kubernetes components...
	I0929 13:10:42.305464 1430964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:10:42.327596 1430964 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:10:42.327632 1430964 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:10:42.329217 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:10:42.329249 1430964 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:10:42.329324 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.329326 1430964 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:10:42.329804 1430964 addons.go:238] Setting addon default-storageclass=true in "no-preload-554589"
	W0929 13:10:42.329826 1430964 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:10:42.329858 1430964 host.go:66] Checking if "no-preload-554589" exists ...
	I0929 13:10:42.330382 1430964 cli_runner.go:164] Run: docker container inspect no-preload-554589 --format={{.State.Status}}
	I0929 13:10:42.330802 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:10:42.330820 1430964 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:10:42.330878 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.332253 1430964 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:10:42.333200 1430964 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.333216 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:10:42.333276 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.358580 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.361394 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.365289 1430964 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:42.366065 1430964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:10:42.366168 1430964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-554589
	I0929 13:10:42.369057 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.398458 1430964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33611 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/no-preload-554589/id_rsa Username:docker}
	I0929 13:10:42.465331 1430964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:10:42.485873 1430964 node_ready.go:35] waiting up to 6m0s for node "no-preload-554589" to be "Ready" ...
	I0929 13:10:42.502155 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.502885 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:10:42.502905 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:10:42.517069 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:10:42.517097 1430964 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:10:42.522944 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:42.538079 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:10:42.538106 1430964 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:10:42.545200 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:10:42.545228 1430964 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:10:42.570649 1430964 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:10:42.570677 1430964 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:10:42.580495 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:10:42.580521 1430964 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:10:42.609253 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:10:42.609285 1430964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:10:42.609512 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 13:10:42.611191 1430964 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:10:42.611286 1430964 retry.go:31] will retry after 216.136192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 13:10:42.634003 1430964 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:10:42.634107 1430964 retry.go:31] will retry after 293.519359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
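	The two "apply failed, will retry" entries above show the pattern used while the apiserver is still coming up: the kubectl apply fails with "connection refused" and is re-run after a short randomized delay (retry.go:31). A rough, hypothetical sketch of that retry loop is below; it is not minikube's retry package, and the attempt count and delay range are illustrative assumptions:

// retrysketch.go: re-run a command with a short randomized backoff until it succeeds.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs the command up to `attempts` times, sleeping a small
// randomized delay between failures, and returns the last error if all fail.
func applyWithRetry(attempts int, name string, args ...string) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d failed: %v\n%s", i+1, err, out)
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %s\n", delay)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	// Example invocation; the manifest path mirrors the one in the log above.
	if err := applyWithRetry(5, "kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println("giving up:", err)
	}
}
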
	I0929 13:10:42.643987 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:10:42.644016 1430964 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:10:42.674561 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:10:42.674595 1430964 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:10:42.702843 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:10:42.702873 1430964 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:10:42.728750 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:10:42.728781 1430964 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:10:42.753082 1430964 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:10:42.753106 1430964 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:10:42.772698 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:10:42.827939 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:10:42.928554 1430964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:10:44.343592 1430964 node_ready.go:49] node "no-preload-554589" is "Ready"
	I0929 13:10:44.343632 1430964 node_ready.go:38] duration metric: took 1.857723898s for node "no-preload-554589" to be "Ready" ...
	I0929 13:10:44.343652 1430964 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:10:44.343710 1430964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:10:44.905717 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.296158013s)
	I0929 13:10:44.905761 1430964 addons.go:479] Verifying addon metrics-server=true in "no-preload-554589"
	I0929 13:10:44.905844 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.133099697s)
	I0929 13:10:44.907337 1430964 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-554589 addons enable metrics-server
	
	I0929 13:10:44.924947 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.096962675s)
	I0929 13:10:44.925023 1430964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.996439442s)
	I0929 13:10:44.925049 1430964 api_server.go:72] duration metric: took 2.626402587s to wait for apiserver process to appear ...
	I0929 13:10:44.925058 1430964 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:10:44.925078 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:44.931266 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:44.931296 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:44.932611 1430964 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I0929 13:10:44.935452 1430964 addons.go:514] duration metric: took 2.636765019s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I0929 13:10:45.426011 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:45.431277 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:45.431304 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:45.925804 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:45.931188 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:45.931222 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:46.425589 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:46.429986 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:10:46.430025 1430964 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:10:46.925637 1430964 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 13:10:46.929914 1430964 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 13:10:46.931143 1430964 api_server.go:141] control plane version: v1.34.0
	I0929 13:10:46.931168 1430964 api_server.go:131] duration metric: took 2.006103154s to wait for apiserver health ...
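	The healthz probes above poll https://192.168.94.2:8443/healthz, treating HTTP 500 (individual poststarthooks still failing) as "not ready yet" and stopping once the endpoint returns 200 "ok". A simplified, hypothetical sketch of that polling loop follows; it is not minikube's api_server.go, the endpoint and timeout are taken from the log as examples, and TLS verification is skipped here only for brevity (the real check trusts minikubeCA):

// healthzpoll.go: poll an HTTPS healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification is an assumption of this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz request failed, retrying:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
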
	I0929 13:10:46.931177 1430964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:10:46.934948 1430964 system_pods.go:59] 9 kube-system pods found
	I0929 13:10:46.935007 1430964 system_pods.go:61] "coredns-66bc5c9577-6cxff" [0ec3329b-47fd-402f-b8ec-d482d1f9b3c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:46.935024 1430964 system_pods.go:61] "etcd-no-preload-554589" [6ae6f226-f3f5-4916-86ac-241f71542eec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:10:46.935032 1430964 system_pods.go:61] "kindnet-5z49c" [b688a8a1-9c75-42a1-be5a-48aff9897101] Running
	I0929 13:10:46.935040 1430964 system_pods.go:61] "kube-apiserver-no-preload-554589" [461eeb18-0997-4f04-b2f2-bd4f93ae16bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:10:46.935048 1430964 system_pods.go:61] "kube-controller-manager-no-preload-554589" [0095f296-2792-42a7-a015-f92d570fe2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:10:46.935052 1430964 system_pods.go:61] "kube-proxy-8kkxk" [0e984503-4cab-4fcf-a1cb-1684d2247f43] Running
	I0929 13:10:46.935064 1430964 system_pods.go:61] "kube-scheduler-no-preload-554589" [e47072a4-1f75-434b-aa66-477204025b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:10:46.935071 1430964 system_pods.go:61] "metrics-server-746fcd58dc-45phl" [638c53d3-4825-4387-bb3a-56dd0be70464] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:10:46.935075 1430964 system_pods.go:61] "storage-provisioner" [af1e37d9-c313-4db3-a626-81403cf9ad15] Running
	I0929 13:10:46.935084 1430964 system_pods.go:74] duration metric: took 3.897674ms to wait for pod list to return data ...
	I0929 13:10:46.935098 1430964 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:10:46.937529 1430964 default_sa.go:45] found service account: "default"
	I0929 13:10:46.937550 1430964 default_sa.go:55] duration metric: took 2.442128ms for default service account to be created ...
	I0929 13:10:46.937558 1430964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:10:46.940321 1430964 system_pods.go:86] 9 kube-system pods found
	I0929 13:10:46.940347 1430964 system_pods.go:89] "coredns-66bc5c9577-6cxff" [0ec3329b-47fd-402f-b8ec-d482d1f9b3c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:46.940355 1430964 system_pods.go:89] "etcd-no-preload-554589" [6ae6f226-f3f5-4916-86ac-241f71542eec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:10:46.940361 1430964 system_pods.go:89] "kindnet-5z49c" [b688a8a1-9c75-42a1-be5a-48aff9897101] Running
	I0929 13:10:46.940368 1430964 system_pods.go:89] "kube-apiserver-no-preload-554589" [461eeb18-0997-4f04-b2f2-bd4f93ae16bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:10:46.940375 1430964 system_pods.go:89] "kube-controller-manager-no-preload-554589" [0095f296-2792-42a7-a015-f92d570fe2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:10:46.940388 1430964 system_pods.go:89] "kube-proxy-8kkxk" [0e984503-4cab-4fcf-a1cb-1684d2247f43] Running
	I0929 13:10:46.940399 1430964 system_pods.go:89] "kube-scheduler-no-preload-554589" [e47072a4-1f75-434b-aa66-477204025b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:10:46.940412 1430964 system_pods.go:89] "metrics-server-746fcd58dc-45phl" [638c53d3-4825-4387-bb3a-56dd0be70464] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:10:46.940419 1430964 system_pods.go:89] "storage-provisioner" [af1e37d9-c313-4db3-a626-81403cf9ad15] Running
	I0929 13:10:46.940427 1430964 system_pods.go:126] duration metric: took 2.863046ms to wait for k8s-apps to be running ...
	I0929 13:10:46.940441 1430964 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:10:46.940488 1430964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:10:46.954207 1430964 system_svc.go:56] duration metric: took 13.760371ms WaitForService to wait for kubelet
	I0929 13:10:46.954239 1430964 kubeadm.go:578] duration metric: took 4.655591833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:10:46.954275 1430964 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:10:46.957433 1430964 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:10:46.957457 1430964 node_conditions.go:123] node cpu capacity is 8
	I0929 13:10:46.957468 1430964 node_conditions.go:105] duration metric: took 3.188601ms to run NodePressure ...
	I0929 13:10:46.957482 1430964 start.go:241] waiting for startup goroutines ...
	I0929 13:10:46.957491 1430964 start.go:246] waiting for cluster config update ...
	I0929 13:10:46.957507 1430964 start.go:255] writing updated cluster config ...
	I0929 13:10:46.957779 1430964 ssh_runner.go:195] Run: rm -f paused
	I0929 13:10:46.961696 1430964 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:10:46.965332 1430964 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6cxff" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:10:48.970466 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	I0929 13:10:50.103007 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:10:50.103057 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:10:50.103075 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:10:50.103091 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:10:50.103100 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:10:50.103107 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:10:50.103115 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:10:50.103122 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:10:50.103130 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:10:50.103135 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:10:50.103156 1359411 retry.go:31] will retry after 45.832047842s: missing components: kube-dns
	W0929 13:10:50.971068 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:52.971656 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:55.470753 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:57.970580 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:10:59.970813 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:01.970891 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:04.471085 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:06.971048 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:09.470881 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:11.471210 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:13.970282 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:15.971635 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:18.470862 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	W0929 13:11:20.471131 1430964 pod_ready.go:104] pod "coredns-66bc5c9577-6cxff" is not "Ready", error: <nil>
	I0929 13:11:21.970955 1430964 pod_ready.go:94] pod "coredns-66bc5c9577-6cxff" is "Ready"
	I0929 13:11:21.971016 1430964 pod_ready.go:86] duration metric: took 35.005660476s for pod "coredns-66bc5c9577-6cxff" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.973497 1430964 pod_ready.go:83] waiting for pod "etcd-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.976953 1430964 pod_ready.go:94] pod "etcd-no-preload-554589" is "Ready"
	I0929 13:11:21.977006 1430964 pod_ready.go:86] duration metric: took 3.479297ms for pod "etcd-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.978873 1430964 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.982431 1430964 pod_ready.go:94] pod "kube-apiserver-no-preload-554589" is "Ready"
	I0929 13:11:21.982453 1430964 pod_ready.go:86] duration metric: took 3.560274ms for pod "kube-apiserver-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:21.984284 1430964 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.168713 1430964 pod_ready.go:94] pod "kube-controller-manager-no-preload-554589" is "Ready"
	I0929 13:11:22.168740 1430964 pod_ready.go:86] duration metric: took 184.436823ms for pod "kube-controller-manager-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.369803 1430964 pod_ready.go:83] waiting for pod "kube-proxy-8kkxk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.769375 1430964 pod_ready.go:94] pod "kube-proxy-8kkxk" is "Ready"
	I0929 13:11:22.769412 1430964 pod_ready.go:86] duration metric: took 399.578121ms for pod "kube-proxy-8kkxk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:22.969424 1430964 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:23.369302 1430964 pod_ready.go:94] pod "kube-scheduler-no-preload-554589" is "Ready"
	I0929 13:11:23.369329 1430964 pod_ready.go:86] duration metric: took 399.880622ms for pod "kube-scheduler-no-preload-554589" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:11:23.369339 1430964 pod_ready.go:40] duration metric: took 36.407610233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:11:23.415562 1430964 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:11:23.417576 1430964 out.go:179] * Done! kubectl is now configured to use "no-preload-554589" cluster and "default" namespace by default
	I0929 13:11:35.940622 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:11:35.940666 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:11:35.940679 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:11:35.940691 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:11:35.940697 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:11:35.940703 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:11:35.940709 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:11:35.940717 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:11:35.940726 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:11:35.940732 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:11:35.940756 1359411 retry.go:31] will retry after 45.593833894s: missing components: kube-dns
	I0929 13:12:21.540022 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:12:21.540068 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:12:21.540078 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:12:21.540088 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:12:21.540096 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:12:21.540102 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:12:21.540108 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:12:21.540112 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:12:21.540117 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:12:21.540120 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:12:21.540139 1359411 retry.go:31] will retry after 1m5.22199495s: missing components: kube-dns
	I0929 13:13:26.769357 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:13:26.769402 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:13:26.769415 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:13:26.769424 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:13:26.769428 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:13:26.769432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:13:26.769438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:13:26.769442 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:13:26.769446 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:13:26.769449 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:13:26.769467 1359411 retry.go:31] will retry after 1m13.959390534s: missing components: kube-dns
	I0929 13:14:40.733869 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:14:40.733915 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:14:40.733926 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:14:40.733934 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:14:40.733937 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:14:40.733942 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:14:40.733946 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:14:40.733951 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:14:40.733954 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:14:40.733958 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:14:40.734002 1359411 retry.go:31] will retry after 1m13.688928173s: missing components: kube-dns
	I0929 13:15:54.426567 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:15:54.426609 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:15:54.426619 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:15:54.426627 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:15:54.426631 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:15:54.426635 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:15:54.426639 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:15:54.426644 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:15:54.426647 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:15:54.426650 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:15:54.426671 1359411 retry.go:31] will retry after 53.851303252s: missing components: kube-dns
	I0929 13:16:48.282415 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:16:48.282459 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:16:48.282471 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:16:48.282478 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:16:48.282481 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:16:48.282486 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:16:48.282489 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:16:48.282493 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:16:48.282496 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:16:48.282499 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:16:48.282516 1359411 retry.go:31] will retry after 1m1.774490631s: missing components: kube-dns
	I0929 13:17:50.062266 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:17:50.062389 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:17:50.062405 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:17:50.062415 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:17:50.062422 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:17:50.062432 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:17:50.062438 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:17:50.062447 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:17:50.062453 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:17:50.062460 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:17:50.062481 1359411 retry.go:31] will retry after 1m8.677032828s: missing components: kube-dns
	I0929 13:18:58.743760 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:18:58.743806 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:18:58.743817 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:18:58.743825 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:18:58.743832 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:18:58.743840 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:18:58.743846 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:18:58.743853 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:18:58.743858 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:18:58.743862 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:18:58.743885 1359411 retry.go:31] will retry after 1m8.264714311s: missing components: kube-dns
	I0929 13:20:07.016096 1359411 system_pods.go:86] 9 kube-system pods found
	I0929 13:20:07.016139 1359411 system_pods.go:89] "calico-kube-controllers-59556d9b4c-v89wt" [6ea53317-40d2-4211-ae22-26aece605076] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 13:20:07.016154 1359411 system_pods.go:89] "calico-node-dvd95" [b38e7822-2a00-4382-b48a-7e16f27310d4] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 13:20:07.016163 1359411 system_pods.go:89] "coredns-66bc5c9577-ntxf6" [99cb013d-40cf-4162-9393-2cbb60aba857] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:20:07.016166 1359411 system_pods.go:89] "etcd-calico-321209" [2caba753-62a5-4a3a-a527-5488cd54e12e] Running
	I0929 13:20:07.016171 1359411 system_pods.go:89] "kube-apiserver-calico-321209" [c4a74425-b2da-45fd-9c71-a723770a2218] Running
	I0929 13:20:07.016174 1359411 system_pods.go:89] "kube-controller-manager-calico-321209" [9108d36f-7000-4dc3-89d3-f87fea6c7997] Running
	I0929 13:20:07.016180 1359411 system_pods.go:89] "kube-proxy-t48k6" [192bfb29-9d30-4367-a3ae-644697743f3f] Running
	I0929 13:20:07.016183 1359411 system_pods.go:89] "kube-scheduler-calico-321209" [940f64e9-3043-46d1-9f48-2395263634b6] Running
	I0929 13:20:07.016186 1359411 system_pods.go:89] "storage-provisioner" [5230ad8e-2b88-4873-9967-35ee23fc64a6] Running
	I0929 13:20:07.016207 1359411 retry.go:31] will retry after 1m6.454135724s: missing components: kube-dns
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	25c7849b49de4       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   7eda4e900a673       dashboard-metrics-scraper-6ffb444bf9-mrrdc
	fd251c5d0e784       6e38f40d628db       8 minutes ago       Running             storage-provisioner         3                   dfa171a4ed934       storage-provisioner
	e580b0127dfd6       409467f978b4a       9 minutes ago       Running             kindnet-cni                 1                   05322e95cb718       kindnet-5z49c
	ca01387f12644       56cc512116c8f       9 minutes ago       Running             busybox                     1                   3bf258c532575       busybox
	1d979b14b26ab       52546a367cc9e       9 minutes ago       Running             coredns                     1                   1af1129a1f8b5       coredns-66bc5c9577-6cxff
	83ac731714bb3       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         2                   dfa171a4ed934       storage-provisioner
	5e80633031c5e       df0860106674d       9 minutes ago       Running             kube-proxy                  1                   ee217a6172795       kube-proxy-8kkxk
	68115be5aa4be       46169d968e920       9 minutes ago       Running             kube-scheduler              1                   94b133b22fb5f       kube-scheduler-no-preload-554589
	c35a9627a37f9       5f1f5298c888d       9 minutes ago       Running             etcd                        1                   8ade610893ad6       etcd-no-preload-554589
	b47e2d7f81b4b       a0af72f2ec6d6       9 minutes ago       Running             kube-controller-manager     1                   905d7f020f204       kube-controller-manager-no-preload-554589
	447139fa2a837       90550c43ad2bc       9 minutes ago       Running             kube-apiserver              1                   5666fe097a514       kube-apiserver-no-preload-554589
	795ed730b0d90       56cc512116c8f       10 minutes ago      Exited              busybox                     0                   22d7bc602f236       busybox
	21b59ec52c2f1       52546a367cc9e       10 minutes ago      Exited              coredns                     0                   a7fad0754c7c0       coredns-66bc5c9577-6cxff
	fe92b189cf883       409467f978b4a       10 minutes ago      Exited              kindnet-cni                 0                   9585f7d801570       kindnet-5z49c
	86d180d3fafec       df0860106674d       10 minutes ago      Exited              kube-proxy                  0                   3341dd5c83a4d       kube-proxy-8kkxk
	f157b54ee5632       46169d968e920       10 minutes ago      Exited              kube-scheduler              0                   e9e02814c0a69       kube-scheduler-no-preload-554589
	8c5c1254cf938       5f1f5298c888d       10 minutes ago      Exited              etcd                        0                   68c46b90c3df3       etcd-no-preload-554589
	448fabba6fe89       a0af72f2ec6d6       10 minutes ago      Exited              kube-controller-manager     0                   315586abc9b62       kube-controller-manager-no-preload-554589
	3e59ee92e127e       90550c43ad2bc       10 minutes ago      Exited              kube-apiserver              0                   e29b1bd7bfd94       kube-apiserver-no-preload-554589
	
	
	==> containerd <==
	Sep 29 13:13:46 no-preload-554589 containerd[480]: time="2025-09-29T13:13:46.350674913Z" level=info msg="RemoveContainer for \"e47eb42212baa9b25cff6df34d1a08f70f5d1be34875ba2dafec93e697aac5e8\" returns successfully"
	Sep 29 13:14:06 no-preload-554589 containerd[480]: time="2025-09-29T13:14:06.825511146Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:14:06 no-preload-554589 containerd[480]: time="2025-09-29T13:14:06.827126586Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:14:07 no-preload-554589 containerd[480]: time="2025-09-29T13:14:07.502329288Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:14:09 no-preload-554589 containerd[480]: time="2025-09-29T13:14:09.367724818Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:14:09 no-preload-554589 containerd[480]: time="2025-09-29T13:14:09.367799146Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 29 13:16:27 no-preload-554589 containerd[480]: time="2025-09-29T13:16:27.828681098Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:16:27 no-preload-554589 containerd[480]: time="2025-09-29T13:16:27.873585655Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
	Sep 29 13:16:27 no-preload-554589 containerd[480]: time="2025-09-29T13:16:27.875022166Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 13:16:27 no-preload-554589 containerd[480]: time="2025-09-29T13:16:27.875070908Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.827279304Z" level=info msg="CreateContainer within sandbox \"7eda4e900a6738355385f1d1a2c9499fa50e19615570e3d9463f03a9fdb78adc\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.839990421Z" level=info msg="CreateContainer within sandbox \"7eda4e900a6738355385f1d1a2c9499fa50e19615570e3d9463f03a9fdb78adc\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49\""
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.840663362Z" level=info msg="StartContainer for \"25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49\""
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.901052263Z" level=info msg="StartContainer for \"25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49\" returns successfully"
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.915162347Z" level=info msg="received exit event container_id:\"25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49\"  id:\"25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49\"  pid:2752  exit_status:1  exited_at:{seconds:1759151798  nanos:914863716}"
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.940912094Z" level=info msg="shim disconnected" id=25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49 namespace=k8s.io
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.940948866Z" level=warning msg="cleaning up after shim disconnected" id=25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49 namespace=k8s.io
	Sep 29 13:16:38 no-preload-554589 containerd[480]: time="2025-09-29T13:16:38.940975744Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:16:39 no-preload-554589 containerd[480]: time="2025-09-29T13:16:39.766205211Z" level=info msg="RemoveContainer for \"eac0540f1e6361d0c71c58baa8175cbe14e65f9ef56ff89c73da450ff8b2976d\""
	Sep 29 13:16:39 no-preload-554589 containerd[480]: time="2025-09-29T13:16:39.770752070Z" level=info msg="RemoveContainer for \"eac0540f1e6361d0c71c58baa8175cbe14e65f9ef56ff89c73da450ff8b2976d\" returns successfully"
	Sep 29 13:16:55 no-preload-554589 containerd[480]: time="2025-09-29T13:16:55.825338801Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:16:55 no-preload-554589 containerd[480]: time="2025-09-29T13:16:55.827049473Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:16:56 no-preload-554589 containerd[480]: time="2025-09-29T13:16:56.501284464Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:16:58 no-preload-554589 containerd[480]: time="2025-09-29T13:16:58.372017392Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:16:58 no-preload-554589 containerd[480]: time="2025-09-29T13:16:58.372066403Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	
	
	==> coredns [1d979b14b26abc6c476dc5cfb879e053dbb7e9fdc8eadd3eb4baf4296757d319] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41598 - 16224 "HINFO IN 3872912427259326573.3361768498413974895. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.198772148s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [21b59ec52c2f189cce4c1c71122fb539bab5404609e8d49bc9bc242623c98f2d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51079 - 40449 "HINFO IN 6588487073909697858.1471815918097734234. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024067209s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               no-preload-554589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-554589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=no-preload-554589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_09_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:09:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-554589
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:20:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:19:34 +0000   Mon, 29 Sep 2025 13:09:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:19:34 +0000   Mon, 29 Sep 2025 13:09:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:19:34 +0000   Mon, 29 Sep 2025 13:09:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:19:34 +0000   Mon, 29 Sep 2025 13:09:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-554589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b35982ad03e4f88a9e11d6c8d99da9b
	  System UUID:                1ad1c296-1dd1-4a66-b956-4731b0e0e480
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-6cxff                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-no-preload-554589                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-5z49c                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-no-preload-554589              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-554589     200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-8kkxk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-554589              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-45phl               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mrrdc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-95jmk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m39s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node no-preload-554589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node no-preload-554589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-554589 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node no-preload-554589 event: Registered Node no-preload-554589 in Controller
	  Normal  Starting                 9m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m44s (x8 over 9m44s)  kubelet          Node no-preload-554589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m44s (x8 over 9m44s)  kubelet          Node no-preload-554589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m44s (x7 over 9m44s)  kubelet          Node no-preload-554589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m37s                  node-controller  Node no-preload-554589 event: Registered Node no-preload-554589 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [8c5c1254cf9381b1212b778b0bea8cccf2cd1cd3a2b9653e31070bc574cbe9d7] <==
	{"level":"warn","ts":"2025-09-29T13:09:33.261567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.269148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.278543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.293045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.302323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.308923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.316706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.325460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.331559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.347214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.363352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.375364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.383421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.390057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.397353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.405531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.412681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.419279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.428656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.437069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.445539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.452279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.475255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.484251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.579479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50938","server-name":"","error":"EOF"}
	
	
	==> etcd [c35a9627a37f91792211628d74fed1b99de1950706ae41ac8fdd805c688534e4] <==
	{"level":"warn","ts":"2025-09-29T13:10:43.710653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.719610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.725933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.732326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.739209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.745643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.752506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.758077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.764539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.771143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.777414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.783577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.791097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.797789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.806887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.810529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.816678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.822751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.829721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.836447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.848155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.851412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.858374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.874329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.913031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46080","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:20:25 up  6:02,  0 users,  load average: 0.53, 0.74, 1.44
	Linux no-preload-554589 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e580b0127dfd6baba394a2d349cfec396ad4c0daff157eb4476e0a101957aff9] <==
	I0929 13:18:16.182076       1 main.go:301] handling current node
	I0929 13:18:26.180057       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:18:26.180086       1 main.go:301] handling current node
	I0929 13:18:36.178029       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:18:36.178059       1 main.go:301] handling current node
	I0929 13:18:46.180082       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:18:46.180137       1 main.go:301] handling current node
	I0929 13:18:56.184654       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:18:56.184689       1 main.go:301] handling current node
	I0929 13:19:06.179410       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:06.179440       1 main.go:301] handling current node
	I0929 13:19:16.178857       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:16.178895       1 main.go:301] handling current node
	I0929 13:19:26.184387       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:26.184423       1 main.go:301] handling current node
	I0929 13:19:36.178101       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:36.178130       1 main.go:301] handling current node
	I0929 13:19:46.180249       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:46.180305       1 main.go:301] handling current node
	I0929 13:19:56.186740       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:19:56.186772       1 main.go:301] handling current node
	I0929 13:20:06.186048       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:20:06.186084       1 main.go:301] handling current node
	I0929 13:20:16.180059       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:20:16.180088       1 main.go:301] handling current node
	
	
	==> kindnet [fe92b189cf883cbe93d9474127d870f453d75c020b22114de99123f9f623f3a1] <==
	I0929 13:09:46.686685       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:09:46.687042       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0929 13:09:46.687232       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:09:46.687257       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:09:46.687290       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:09:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:09:46.914300       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:09:46.914395       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:09:46.914408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:09:46.914610       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:09:47.385272       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:09:47.385308       1 metrics.go:72] Registering metrics
	I0929 13:09:47.385386       1 controller.go:711] "Syncing nftables rules"
	I0929 13:09:56.921074       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:09:56.921130       1 main.go:301] handling current node
	I0929 13:10:06.915171       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:10:06.915233       1 main.go:301] handling current node
	I0929 13:10:16.914755       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:10:16.914784       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e59ee92e127e9ebe23e71830eaec1c6942debeff812ea825dca6bd1ca6af1b8] <==
	I0929 13:09:36.929792       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:09:40.730637       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:09:40.733931       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:09:41.427817       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0929 13:09:41.627484       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0929 13:10:22.839275       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:42886: use of closed network connection
	I0929 13:10:23.547362       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:10:23.551941       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:10:23.552021       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:10:23.552077       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:10:23.625958       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.106.169.232"}
	W0929 13:10:23.636055       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:10:23.636115       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:10:23.639080       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0929 13:10:23.643274       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:10:23.643354       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [447139fa2a8378fdb2b0fca317fd4afff7300bf48e07be8ab8398df6eb02b3c9] <==
	W0929 13:16:45.332314       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:16:45.332367       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:16:45.332385       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:16:45.333398       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:16:45.333457       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:16:45.333489       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:16:48.517217       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:17:50.256781       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:18:17.625430       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:18:45.333213       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:18:45.333270       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:18:45.333285       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:18:45.334307       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:18:45.334395       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:18:45.334415       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:18:55.756272       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:19:19.945014       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:20.329939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:20.561260       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [448fabba6fe89ac66791993182ef471d034e865da39b82ac763c5f6f70777c96] <==
	I0929 13:09:40.725016       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:09:40.725028       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:09:40.725001       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 13:09:40.725365       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 13:09:40.726074       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 13:09:40.726083       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 13:09:40.726107       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 13:09:40.726364       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 13:09:40.726474       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 13:09:40.727016       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 13:09:40.727037       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 13:09:40.727057       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 13:09:40.727081       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 13:09:40.728475       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 13:09:40.728479       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:09:40.728508       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 13:09:40.729640       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 13:09:40.730530       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 13:09:40.730600       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 13:09:40.730652       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 13:09:40.730659       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 13:09:40.730666       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 13:09:40.733918       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:09:40.736172       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-554589" podCIDRs=["10.244.0.0/24"]
	I0929 13:09:40.746491       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [b47e2d7f81b4b23dbebd2a39fe3aa25f3a12719fdc16d113d1eafbbff29cc7d8] <==
	I0929 13:14:18.825066       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:14:48.793221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:14:48.831941       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:15:18.797565       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:15:18.839573       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:15:48.802768       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:15:48.847497       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:16:18.807174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:16:18.854217       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:16:48.811713       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:16:48.862026       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:17:18.816023       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:17:18.868566       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:17:48.821168       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:17:48.875801       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:18.825801       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:18.883507       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:18:48.830436       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:18:48.890748       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:18.834652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:18.897308       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:19:48.840135       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:19:48.904599       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:20:18.845273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:20:18.911405       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [5e80633031c5e83ef8f89d63ef7c4799609c4e066ed6fbc2f242b8eef8ffd994] <==
	I0929 13:10:45.408593       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:10:45.473373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:10:45.573780       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:10:45.573814       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 13:10:45.573890       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:10:45.686115       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:10:45.686199       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:10:45.692415       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:10:45.692912       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:10:45.692945       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:45.694484       1 config.go:200] "Starting service config controller"
	I0929 13:10:45.694489       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:10:45.694529       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:10:45.694524       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:10:45.694560       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:10:45.694568       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:10:45.694596       1 config.go:309] "Starting node config controller"
	I0929 13:10:45.694610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:10:45.794690       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:10:45.794691       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:10:45.794702       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:10:45.794732       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [86d180d3fafecd80e755e727e2f50ad02bd1ea0707d33e41b1e2c298740f82b2] <==
	I0929 13:09:42.573617       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:09:42.626997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:09:42.727892       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:09:42.727933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 13:09:42.728045       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:09:42.751349       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:09:42.751405       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:09:42.757984       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:09:42.758424       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:09:42.758467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:42.760131       1 config.go:309] "Starting node config controller"
	I0929 13:09:42.760172       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:09:42.760236       1 config.go:200] "Starting service config controller"
	I0929 13:09:42.760312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:09:42.760413       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:09:42.760424       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:09:42.760439       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:09:42.760454       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:09:42.860556       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:09:42.860585       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:09:42.860612       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:09:42.860761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [68115be5aa4bea7e23ff40644b8a834960c63546ff756e90d8e8c11fa12e3f88] <==
	I0929 13:10:43.560778       1 serving.go:386] Generated self-signed cert in-memory
	I0929 13:10:44.375547       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:10:44.375650       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:44.382626       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:10:44.382737       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 13:10:44.382759       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 13:10:44.382786       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:10:44.383044       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:44.383070       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:44.383278       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 13:10:44.383298       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 13:10:44.483900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:44.483861       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 13:10:44.483859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [f157b54ee5632361a5614f30127b6f5dfc89ff0daa05de53a9f5257c9ebec23a] <==
	E0929 13:09:34.291886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:09:34.292306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:09:34.292383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:09:34.292484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:09:34.292547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:09:34.292603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:09:34.292666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:09:34.292724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:09:34.292912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:09:34.295117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:09:34.295317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 13:09:34.297948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:09:34.298379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:09:34.299032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:09:34.299164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 13:09:35.104119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:09:35.152292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:09:35.161472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:09:35.199931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:09:35.211848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:09:35.250936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:09:35.282421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:09:35.405376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:09:35.436484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0929 13:09:35.883539       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:19:03 no-preload-554589 kubelet[602]: E0929 13:19:03.825115     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:19:06 no-preload-554589 kubelet[602]: E0929 13:19:06.825036     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:19:08 no-preload-554589 kubelet[602]: I0929 13:19:08.824257     602 scope.go:117] "RemoveContainer" containerID="25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49"
	Sep 29 13:19:08 no-preload-554589 kubelet[602]: E0929 13:19:08.824502     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:19:16 no-preload-554589 kubelet[602]: E0929 13:19:16.824541     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:19:17 no-preload-554589 kubelet[602]: E0929 13:19:17.825505     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:19:22 no-preload-554589 kubelet[602]: I0929 13:19:22.824407     602 scope.go:117] "RemoveContainer" containerID="25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49"
	Sep 29 13:19:22 no-preload-554589 kubelet[602]: E0929 13:19:22.824566     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:19:31 no-preload-554589 kubelet[602]: E0929 13:19:31.825815     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:19:32 no-preload-554589 kubelet[602]: E0929 13:19:32.824461     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:19:34 no-preload-554589 kubelet[602]: I0929 13:19:34.824323     602 scope.go:117] "RemoveContainer" containerID="25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49"
	Sep 29 13:19:34 no-preload-554589 kubelet[602]: E0929 13:19:34.824583     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:19:43 no-preload-554589 kubelet[602]: E0929 13:19:43.824668     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:19:47 no-preload-554589 kubelet[602]: E0929 13:19:47.825544     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:19:49 no-preload-554589 kubelet[602]: I0929 13:19:49.824700     602 scope.go:117] "RemoveContainer" containerID="25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49"
	Sep 29 13:19:49 no-preload-554589 kubelet[602]: E0929 13:19:49.824919     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:19:56 no-preload-554589 kubelet[602]: E0929 13:19:56.825225     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:20:00 no-preload-554589 kubelet[602]: E0929 13:20:00.825130     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:20:03 no-preload-554589 kubelet[602]: I0929 13:20:03.823812     602 scope.go:117] "RemoveContainer" containerID="25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49"
	Sep 29 13:20:03 no-preload-554589 kubelet[602]: E0929 13:20:03.824010     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:20:11 no-preload-554589 kubelet[602]: E0929 13:20:11.825908     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:20:11 no-preload-554589 kubelet[602]: E0929 13:20:11.825908     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:20:15 no-preload-554589 kubelet[602]: I0929 13:20:15.824216     602 scope.go:117] "RemoveContainer" containerID="25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49"
	Sep 29 13:20:15 no-preload-554589 kubelet[602]: E0929 13:20:15.824375     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:20:24 no-preload-554589 kubelet[602]: E0929 13:20:24.824929     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	
	
	==> storage-provisioner [83ac731714bb3c23ec7e41e1d1f4691e6cd1622fc3f74c254590501650ee838a] <==
	I0929 13:10:45.370127       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:11:15.372444       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd251c5d0e78412012c06260873e50df4a3259111f8b772b29a8ce5864a7925c] <==
	W0929 13:20:01.195026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:03.198820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:03.202821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:05.205399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:05.210185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:07.213182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:07.219026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:09.222190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:09.226349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:11.229784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:11.233776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:13.236666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:13.241340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:15.244096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:15.248039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:17.250841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:17.254857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:19.258613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:19.262483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:21.265815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:21.270776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:23.274752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:23.279656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:25.283451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:20:25.288218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554589 -n no-preload-554589
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-554589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-554589 describe pod metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-554589 describe pod metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk: exit status 1 (65.130541ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-45phl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-95jmk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-554589 describe pod metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.92s)
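The kubelet log above shows the dashboard image pull for this profile failing with HTTP 429 ("toomanyrequests") from registry-1.docker.io, i.e. Docker Hub's unauthenticated pull rate limit, while the metrics-server pull fails against fake.domain, which does not resolve. A minimal sketch for confirming the rate limit from the affected host, using Docker Hub's documented ratelimitpreview/test check (assumes curl and jq are installed; a HEAD request does not consume quota):

	# fetch an anonymous pull token, then read the RateLimit headers
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

If ratelimit-remaining reports 0, authenticating the pull or side-loading the dashboard image into the cluster (for example via minikube image load) are the usual workarounds; both are outside the scope of this test run.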

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r4kbj" [60e7b4a4-451c-4c51-ae68-8a626ab1e1a7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-495121 -n old-k8s-version-495121
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:28:10.070330264 +0000 UTC m=+4254.654353455
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-495121 describe po kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-495121 describe po kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-r4kbj
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-495121/192.168.85.2
Start Time:       Mon, 29 Sep 2025 13:09:45 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bc8ds (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-bc8ds:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj to old-k8s-version-495121
Normal   Pulling    16m (x4 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     16m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     16m (x4 over 18m)     kubelet            Error: ErrImagePull
Warning  Failed     16m (x6 over 18m)     kubelet            Error: ImagePullBackOff
Normal   BackOff    3m16s (x63 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-495121 logs kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-495121 logs kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard: exit status 1 (71.77225ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-r4kbj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-495121 logs kubernetes-dashboard-8694d4445c-r4kbj -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-495121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-495121
helpers_test.go:243: (dbg) docker inspect old-k8s-version-495121:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e",
	        "Created": "2025-09-29T13:08:13.162993854Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1421162,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:09:23.617275974Z",
	            "FinishedAt": "2025-09-29T13:09:22.71435807Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/hosts",
	        "LogPath": "/var/lib/docker/containers/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e/a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e-json.log",
	        "Name": "/old-k8s-version-495121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-495121:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-495121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a1aa9630160311ea6a5f163f3947f826be23ccae30cd89f5dd8458be05c8d52e",
	                "LowerDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05eef8c3290607aa741d1676ed15445122e749f396ed59979ed0d88075f40511/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-495121",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-495121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-495121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-495121",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-495121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2446c71494f7c1548bf18be8378818cd7d1090004635cda03b07522479e6cd25",
	            "SandboxKey": "/var/run/docker/netns/2446c71494f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33601"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33602"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33605"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33603"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33604"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-495121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:fd:6d:f2:f2:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8c81c3a8f3b9196bcf906d745c96c3d5c02bfac2fc5ce07f5699ae04f8992ce",
	                    "EndpointID": "2131440c530d6685d8947b302d338a83dd2d78d163419d31ac86388c624a7c9a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-495121",
	                        "a1aa96301603"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
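The Ports map in the inspect output above shows each published container port bound on 127.0.0.1 to an ephemeral host port (22/tcp on 33601, 2376/tcp on 33602, 5000/tcp on 33603, 8443/tcp on 33604, 32443/tcp on 33605). The provisioning log further below resolves mappings like these with a Go-template query against docker container inspect; a minimal sketch of that lookup for the container in this dump (expected values read from the JSON above, not re-run) would be:

	# SSH port (22/tcp) of the node container; for this dump it should print 33601.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-495121
	# Kubernetes API server port (8443/tcp); for this dump it should print 33604.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-495121
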
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-495121 -n old-k8s-version-495121
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-495121 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-495121 logs -n 25: (1.522071872s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-321209 sudo cat /var/lib/kubelet/config.yaml                                                                                                                         │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo systemctl status docker --all --full --no-pager                                                                                                          │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo systemctl cat docker --no-pager                                                                                                                          │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cat /etc/docker/daemon.json                                                                                                                              │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo docker system info                                                                                                                                       │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo systemctl status cri-docker --all --full --no-pager                                                                                                      │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo systemctl cat cri-docker --no-pager                                                                                                                      │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                 │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                           │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cri-dockerd --version                                                                                                                                    │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo systemctl status containerd --all --full --no-pager                                                                                                      │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo systemctl cat containerd --no-pager                                                                                                                      │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cat /lib/systemd/system/containerd.service                                                                                                               │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cat /etc/containerd/config.toml                                                                                                                          │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo containerd config dump                                                                                                                                   │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo systemctl status crio --all --full --no-pager                                                                                                            │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo systemctl cat crio --no-pager                                                                                                                            │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                  │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo crio config                                                                                                                                              │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ delete  │ -p calico-321209                                                                                                                                                               │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ start   │ -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-625526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ stop    │ -p default-k8s-diff-port-625526 --alsologtostderr -v=3                                                                                                                         │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-625526 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                        │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:23 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:22:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:22:35.562159 1451965 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:22:35.562420 1451965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:22:35.562429 1451965 out.go:374] Setting ErrFile to fd 2...
	I0929 13:22:35.562439 1451965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:22:35.562685 1451965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:22:35.563162 1451965 out.go:368] Setting JSON to false
	I0929 13:22:35.564616 1451965 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21893,"bootTime":1759130263,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:22:35.564699 1451965 start.go:140] virtualization: kvm guest
	I0929 13:22:35.566619 1451965 out.go:179] * [default-k8s-diff-port-625526] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:22:35.567736 1451965 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:22:35.567742 1451965 notify.go:220] Checking for updates...
	I0929 13:22:35.569658 1451965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:22:35.570709 1451965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:22:35.571748 1451965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:22:35.572626 1451965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:22:35.573494 1451965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:22:35.574841 1451965 config.go:182] Loaded profile config "default-k8s-diff-port-625526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:22:35.575368 1451965 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:22:35.599224 1451965 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:22:35.599304 1451965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:22:35.657708 1451965 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:22:35.64598813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:22:35.657798 1451965 docker.go:318] overlay module found
	I0929 13:22:35.659416 1451965 out.go:179] * Using the docker driver based on existing profile
	I0929 13:22:35.660328 1451965 start.go:304] selected driver: docker
	I0929 13:22:35.660342 1451965 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-625526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-625526 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:22:35.660421 1451965 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:22:35.660921 1451965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:22:35.718374 1451965 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:22:35.70829993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:22:35.718734 1451965 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:22:35.718784 1451965 cni.go:84] Creating CNI manager for ""
	I0929 13:22:35.718856 1451965 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:22:35.718916 1451965 start.go:348] cluster config:
	{Name:default-k8s-diff-port-625526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-625526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:22:35.720688 1451965 out.go:179] * Starting "default-k8s-diff-port-625526" primary control-plane node in "default-k8s-diff-port-625526" cluster
	I0929 13:22:35.721651 1451965 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:22:35.722653 1451965 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:22:35.723686 1451965 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:22:35.723739 1451965 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 13:22:35.723752 1451965 cache.go:58] Caching tarball of preloaded images
	I0929 13:22:35.723768 1451965 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:22:35.723902 1451965 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 13:22:35.723919 1451965 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 13:22:35.724077 1451965 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/config.json ...
	I0929 13:22:35.745420 1451965 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:22:35.745443 1451965 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:22:35.745459 1451965 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:22:35.745485 1451965 start.go:360] acquireMachinesLock for default-k8s-diff-port-625526: {Name:mkf8110bcaaa3c2db4df59e61ce86791500ff674 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:22:35.745543 1451965 start.go:364] duration metric: took 38.921µs to acquireMachinesLock for "default-k8s-diff-port-625526"
	I0929 13:22:35.745567 1451965 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:22:35.745577 1451965 fix.go:54] fixHost starting: 
	I0929 13:22:35.745790 1451965 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-625526 --format={{.State.Status}}
	I0929 13:22:35.765013 1451965 fix.go:112] recreateIfNeeded on default-k8s-diff-port-625526: state=Stopped err=<nil>
	W0929 13:22:35.765054 1451965 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:22:35.766704 1451965 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-625526" ...
	I0929 13:22:35.766770 1451965 cli_runner.go:164] Run: docker start default-k8s-diff-port-625526
	I0929 13:22:36.003801 1451965 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-625526 --format={{.State.Status}}
	I0929 13:22:36.024053 1451965 kic.go:430] container "default-k8s-diff-port-625526" state is running.
	I0929 13:22:36.024426 1451965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-625526
	I0929 13:22:36.042249 1451965 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/config.json ...
	I0929 13:22:36.042521 1451965 machine.go:93] provisionDockerMachine start ...
	I0929 13:22:36.042625 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:36.060579 1451965 main.go:141] libmachine: Using SSH client type: native
	I0929 13:22:36.060853 1451965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33621 <nil> <nil>}
	I0929 13:22:36.060868 1451965 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:22:36.061549 1451965 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40308->127.0.0.1:33621: read: connection reset by peer
	I0929 13:22:39.200017 1451965 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-625526
	
	I0929 13:22:39.200054 1451965 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-625526"
	I0929 13:22:39.200127 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:39.218057 1451965 main.go:141] libmachine: Using SSH client type: native
	I0929 13:22:39.218316 1451965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33621 <nil> <nil>}
	I0929 13:22:39.218333 1451965 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-625526 && echo "default-k8s-diff-port-625526" | sudo tee /etc/hostname
	I0929 13:22:39.365924 1451965 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-625526
	
	I0929 13:22:39.366029 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:39.383699 1451965 main.go:141] libmachine: Using SSH client type: native
	I0929 13:22:39.383999 1451965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33621 <nil> <nil>}
	I0929 13:22:39.384027 1451965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-625526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-625526/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-625526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:22:39.521516 1451965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:22:39.521549 1451965 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:22:39.521573 1451965 ubuntu.go:190] setting up certificates
	I0929 13:22:39.521588 1451965 provision.go:84] configureAuth start
	I0929 13:22:39.521644 1451965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-625526
	I0929 13:22:39.541871 1451965 provision.go:143] copyHostCerts
	I0929 13:22:39.541926 1451965 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:22:39.541942 1451965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:22:39.542032 1451965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:22:39.542172 1451965 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:22:39.542188 1451965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:22:39.542232 1451965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:22:39.542370 1451965 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:22:39.542380 1451965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:22:39.542407 1451965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:22:39.542504 1451965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-625526 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-625526 localhost minikube]
	I0929 13:22:39.899274 1451965 provision.go:177] copyRemoteCerts
	I0929 13:22:39.899390 1451965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:22:39.899441 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:39.918183 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:40.018308 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:22:40.043190 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 13:22:40.068712 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:22:40.093973 1451965 provision.go:87] duration metric: took 572.354889ms to configureAuth
	I0929 13:22:40.094003 1451965 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:22:40.094167 1451965 config.go:182] Loaded profile config "default-k8s-diff-port-625526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:22:40.094178 1451965 machine.go:96] duration metric: took 4.051640741s to provisionDockerMachine
	I0929 13:22:40.094185 1451965 start.go:293] postStartSetup for "default-k8s-diff-port-625526" (driver="docker")
	I0929 13:22:40.094198 1451965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:22:40.094245 1451965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:22:40.094288 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:40.111694 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:40.209001 1451965 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:22:40.212641 1451965 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:22:40.212670 1451965 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:22:40.212678 1451965 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:22:40.212685 1451965 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:22:40.212697 1451965 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:22:40.212752 1451965 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:22:40.212846 1451965 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:22:40.212986 1451965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:22:40.222587 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:22:40.247222 1451965 start.go:296] duration metric: took 153.0217ms for postStartSetup
	I0929 13:22:40.247295 1451965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:22:40.247342 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:40.264801 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:40.357129 1451965 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:22:40.361626 1451965 fix.go:56] duration metric: took 4.616040318s for fixHost
	I0929 13:22:40.361655 1451965 start.go:83] releasing machines lock for "default-k8s-diff-port-625526", held for 4.616097717s
	I0929 13:22:40.361729 1451965 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-625526
	I0929 13:22:40.378405 1451965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:22:40.378444 1451965 ssh_runner.go:195] Run: cat /version.json
	I0929 13:22:40.378496 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:40.378496 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:40.397193 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:40.397581 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:40.574289 1451965 ssh_runner.go:195] Run: systemctl --version
	I0929 13:22:40.579602 1451965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:22:40.584253 1451965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:22:40.604182 1451965 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:22:40.604304 1451965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:22:40.613940 1451965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:22:40.613980 1451965 start.go:495] detecting cgroup driver to use...
	I0929 13:22:40.614017 1451965 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:22:40.614061 1451965 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:22:40.627847 1451965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:22:40.640161 1451965 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:22:40.640222 1451965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:22:40.652771 1451965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:22:40.664443 1451965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:22:40.726198 1451965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:22:40.790823 1451965 docker.go:234] disabling docker service ...
	I0929 13:22:40.790899 1451965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:22:40.803906 1451965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:22:40.815938 1451965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:22:40.882333 1451965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:22:40.951101 1451965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:22:40.963427 1451965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:22:40.981103 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:22:40.991558 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:22:41.002057 1451965 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:22:41.002107 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:22:41.012427 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:22:41.022544 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:22:41.032094 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:22:41.042013 1451965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:22:41.051147 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:22:41.061030 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:22:41.070948 1451965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:22:41.080927 1451965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:22:41.089278 1451965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:22:41.099281 1451965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:22:41.163535 1451965 ssh_runner.go:195] Run: sudo systemctl restart containerd
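	(The sed edits above rewrite /etc/containerd/config.toml in place before this restart: SystemdCgroup = true for the systemd cgroup driver, sandbox_image = "registry.k8s.io/pause:3.10.1", restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d", and enable_unprivileged_ports = true. A quick spot-check of the result, in the same style as the audit-table probes and not part of this test run, might be:
	  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-625526 sudo grep -n SystemdCgroup /etc/containerd/config.toml
	which should show SystemdCgroup = true if the patch applied.)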
	I0929 13:22:41.278116 1451965 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:22:41.278199 1451965 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:22:41.282644 1451965 start.go:563] Will wait 60s for crictl version
	I0929 13:22:41.282690 1451965 ssh_runner.go:195] Run: which crictl
	I0929 13:22:41.286180 1451965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:22:41.320901 1451965 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 13:22:41.321064 1451965 ssh_runner.go:195] Run: containerd --version
	I0929 13:22:41.349623 1451965 ssh_runner.go:195] Run: containerd --version
	I0929 13:22:41.375858 1451965 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:22:41.376755 1451965 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-625526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:22:41.395095 1451965 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 13:22:41.399088 1451965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:22:41.411148 1451965 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-625526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-625526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:22:41.411284 1451965 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:22:41.411350 1451965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:22:41.447201 1451965 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:22:41.447231 1451965 containerd.go:534] Images already preloaded, skipping extraction
	I0929 13:22:41.447287 1451965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:22:41.481051 1451965 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:22:41.481075 1451965 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:22:41.481083 1451965 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 containerd true true} ...
	I0929 13:22:41.481179 1451965 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-625526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-625526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:22:41.481237 1451965 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:22:41.517660 1451965 cni.go:84] Creating CNI manager for ""
	I0929 13:22:41.517684 1451965 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:22:41.517695 1451965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:22:41.517715 1451965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-625526 NodeName:default-k8s-diff-port-625526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:22:41.517838 1451965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-625526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:22:41.517897 1451965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:22:41.527574 1451965 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:22:41.527629 1451965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:22:41.537293 1451965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0929 13:22:41.555637 1451965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:22:41.573591 1451965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I0929 13:22:41.591658 1451965 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:22:41.595278 1451965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:22:41.607252 1451965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:22:41.675385 1451965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:22:41.704005 1451965 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526 for IP: 192.168.76.2
	I0929 13:22:41.704025 1451965 certs.go:194] generating shared ca certs ...
	I0929 13:22:41.704042 1451965 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:22:41.704249 1451965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:22:41.704318 1451965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:22:41.704335 1451965 certs.go:256] generating profile certs ...
	I0929 13:22:41.704452 1451965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/client.key
	I0929 13:22:41.704546 1451965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/apiserver.key.2204ab67
	I0929 13:22:41.704623 1451965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/proxy-client.key
	I0929 13:22:41.704750 1451965 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:22:41.704782 1451965 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:22:41.704792 1451965 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:22:41.704813 1451965 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:22:41.704833 1451965 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:22:41.704856 1451965 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:22:41.704898 1451965 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:22:41.705467 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:22:41.733351 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:22:41.761658 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:22:41.794570 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:22:41.823710 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 13:22:41.849489 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:22:41.874610 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:22:41.900933 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/default-k8s-diff-port-625526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:22:41.928754 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:22:41.955425 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:22:41.980458 1451965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:22:42.006126 1451965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:22:42.025372 1451965 ssh_runner.go:195] Run: openssl version
	I0929 13:22:42.031314 1451965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:22:42.041284 1451965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:22:42.044982 1451965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:22:42.045036 1451965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:22:42.051917 1451965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:22:42.061256 1451965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:22:42.070823 1451965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:22:42.074491 1451965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:22:42.074527 1451965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:22:42.081354 1451965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 13:22:42.089976 1451965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:22:42.099815 1451965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:22:42.103425 1451965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:22:42.103465 1451965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:22:42.110334 1451965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
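	The three ln -fs commands above follow the OpenSSL subject-hash convention: TLS clients locate a CA in /etc/ssl/certs by the hash of its subject name, so each PEM gets a symlink named <hash>.0. A minimal sketch of the same step, using the minikubeCA path from this run:
	
		cert=/usr/share/ca-certificates/minikubeCA.pem
		hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this CA, matching the link created above
		sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"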
	I0929 13:22:42.119648 1451965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:22:42.123105 1451965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:22:42.129778 1451965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:22:42.136313 1451965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:22:42.142837 1451965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:22:42.149398 1451965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:22:42.155620 1451965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
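	The six openssl runs above use -checkend 86400, which exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and 1 if it would expire within that window, presumably so the start path can decide whether the control-plane certs need regenerating. A sketch of the same probe on one cert from the log (the echo strings are illustrative, not minikube output):
	
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
			&& echo "still valid for at least 24h" \
			|| echo "expires within 24h"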
	I0929 13:22:42.161711 1451965 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-625526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-625526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:22:42.161833 1451965 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:22:42.161895 1451965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:22:42.197644 1451965 cri.go:89] found id: "52dd411a810c3fa94118369009907067df080cbb16971d73b517e71e120a8c3e"
	I0929 13:22:42.197666 1451965 cri.go:89] found id: "cccd0878af215753c315b8010e7796bee681fde95858984f9eb55d13b581e86f"
	I0929 13:22:42.197670 1451965 cri.go:89] found id: "8730db506be53d603d6e5354998b77fdfd5825608b87a598d9e5040b46cbeab7"
	I0929 13:22:42.197673 1451965 cri.go:89] found id: "18033609a785b3ef0f90fccb847276575ac648cbaec8ca8696ccd7a559d0ec57"
	I0929 13:22:42.197676 1451965 cri.go:89] found id: "83c9e3b96e2a5728a639dfbb2fdbc7cf856add21b860c97b648938e6beba8b60"
	I0929 13:22:42.197679 1451965 cri.go:89] found id: "bc7b53b8e499d75c0a104765688d458b2210e7543d2d10f7764511a09984e08f"
	I0929 13:22:42.197681 1451965 cri.go:89] found id: "54662278673f59b001060e69dfff2d0a1b8da29b92acfc75fd75584fc542ad3f"
	I0929 13:22:42.197683 1451965 cri.go:89] found id: "8154dc34f513ae97cc439160f222da92653f764331ca537a3530fddeaf8f1933"
	I0929 13:22:42.197686 1451965 cri.go:89] found id: ""
	I0929 13:22:42.197735 1451965 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0929 13:22:42.213278 1451965 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-29T13:22:42Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0929 13:22:42.213349 1451965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:22:42.224292 1451965 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:22:42.224314 1451965 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:22:42.224366 1451965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:22:42.236105 1451965 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:22:42.237582 1451965 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-625526" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:22:42.238651 1451965 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1097891/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-625526" cluster setting kubeconfig missing "default-k8s-diff-port-625526" context setting]
	I0929 13:22:42.240241 1451965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:22:42.242851 1451965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:22:42.257204 1451965 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 13:22:42.257244 1451965 kubeadm.go:593] duration metric: took 32.922014ms to restartPrimaryControlPlane
	I0929 13:22:42.257256 1451965 kubeadm.go:394] duration metric: took 95.552835ms to StartCluster
	I0929 13:22:42.257280 1451965 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:22:42.257359 1451965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:22:42.259273 1451965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:22:42.259562 1451965 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:22:42.259797 1451965 config.go:182] Loaded profile config "default-k8s-diff-port-625526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:22:42.259739 1451965 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:22:42.259876 1451965 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-625526"
	I0929 13:22:42.259893 1451965 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-625526"
	W0929 13:22:42.259901 1451965 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:22:42.259927 1451965 host.go:66] Checking if "default-k8s-diff-port-625526" exists ...
	I0929 13:22:42.259933 1451965 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-625526"
	I0929 13:22:42.259977 1451965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-625526"
	I0929 13:22:42.259976 1451965 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-625526"
	I0929 13:22:42.260003 1451965 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-625526"
	W0929 13:22:42.260016 1451965 addons.go:247] addon metrics-server should already be in state true
	I0929 13:22:42.259954 1451965 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-625526"
	I0929 13:22:42.260065 1451965 host.go:66] Checking if "default-k8s-diff-port-625526" exists ...
	I0929 13:22:42.260106 1451965 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-625526"
	W0929 13:22:42.260122 1451965 addons.go:247] addon dashboard should already be in state true
	I0929 13:22:42.260155 1451965 host.go:66] Checking if "default-k8s-diff-port-625526" exists ...
	I0929 13:22:42.260348 1451965 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-625526 --format={{.State.Status}}
	I0929 13:22:42.260475 1451965 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-625526 --format={{.State.Status}}
	I0929 13:22:42.260574 1451965 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-625526 --format={{.State.Status}}
	I0929 13:22:42.260638 1451965 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-625526 --format={{.State.Status}}
	I0929 13:22:42.261479 1451965 out.go:179] * Verifying Kubernetes components...
	I0929 13:22:42.262461 1451965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:22:42.294767 1451965 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:22:42.295917 1451965 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:22:42.295943 1451965 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:22:42.296054 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:42.298708 1451965 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-625526"
	W0929 13:22:42.298857 1451965 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:22:42.298934 1451965 host.go:66] Checking if "default-k8s-diff-port-625526" exists ...
	I0929 13:22:42.299717 1451965 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-625526 --format={{.State.Status}}
	I0929 13:22:42.305130 1451965 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:22:42.305288 1451965 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:22:42.306128 1451965 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:22:42.306210 1451965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:22:42.306179 1451965 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:22:42.306393 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:42.310468 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:22:42.310486 1451965 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:22:42.310544 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:42.337561 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:42.338069 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:42.341430 1451965 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:22:42.341488 1451965 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:22:42.341577 1451965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-625526
	I0929 13:22:42.342299 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:42.362773 1451965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33621 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/default-k8s-diff-port-625526/id_rsa Username:docker}
	I0929 13:22:42.403582 1451965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:22:42.432418 1451965 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-625526" to be "Ready" ...
	I0929 13:22:42.459190 1451965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:22:42.473562 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:22:42.473646 1451965 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:22:42.475810 1451965 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:22:42.475828 1451965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:22:42.495043 1451965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:22:42.504943 1451965 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:22:42.504978 1451965 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:22:42.507009 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:22:42.507029 1451965 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:22:42.537154 1451965 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:22:42.537192 1451965 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:22:42.547535 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:22:42.547568 1451965 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0929 13:22:42.557653 1451965 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:22:42.557705 1451965 retry.go:31] will retry after 169.469858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:22:42.569727 1451965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:22:42.576389 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:22:42.576421 1451965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 13:22:42.580887 1451965 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:22:42.580927 1451965 retry.go:31] will retry after 328.041075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:22:42.607794 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:22:42.607822 1451965 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:22:42.638927 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:22:42.638990 1451965 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 13:22:42.665139 1451965 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:22:42.665187 1451965 retry.go:31] will retry after 311.839228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:22:42.671692 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:22:42.671713 1451965 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:22:42.697888 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:22:42.698078 1451965 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:22:42.722315 1451965 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:22:42.722347 1451965 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:22:42.727887 1451965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:22:42.743562 1451965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:22:42.909726 1451965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:22:42.978025 1451965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:22:44.316417 1451965 node_ready.go:49] node "default-k8s-diff-port-625526" is "Ready"
	I0929 13:22:44.316461 1451965 node_ready.go:38] duration metric: took 1.884006742s for node "default-k8s-diff-port-625526" to be "Ready" ...
	I0929 13:22:44.316481 1451965 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:22:44.316540 1451965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:22:44.905951 1451965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.178015944s)
	I0929 13:22:44.906190 1451965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.996416435s)
	I0929 13:22:44.906176 1451965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.162544974s)
	I0929 13:22:44.906384 1451965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.92832139s)
	I0929 13:22:44.906423 1451965 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-625526"
	I0929 13:22:44.906431 1451965 api_server.go:72] duration metric: took 2.646830656s to wait for apiserver process to appear ...
	I0929 13:22:44.906443 1451965 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:22:44.906463 1451965 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:22:44.907749 1451965 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-625526 addons enable metrics-server
	
	I0929 13:22:44.910907 1451965 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:22:44.910939 1451965 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:22:44.914792 1451965 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0929 13:22:44.915724 1451965 addons.go:514] duration metric: took 2.655993064s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
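	The 500 responses above and below are kube-apiserver's verbose healthz report: each [-] line is a post-start hook that has not finished yet, and the wait loop simply polls until the aggregate status flips to 200. The same per-check breakdown can be requested by hand; a hedged sketch assuming curl on the host and the default anonymous-access RBAC for /healthz (not something the test runs), against the endpoint from the log:
	
		curl -sk "https://192.168.76.2:8444/healthz?verbose"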
	I0929 13:22:45.406864 1451965 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:22:45.412119 1451965 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:22:45.412147 1451965 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:22:45.906689 1451965 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:22:45.911025 1451965 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:22:45.911077 1451965 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:22:46.406674 1451965 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:22:46.411198 1451965 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:22:46.411228 1451965 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:22:46.906852 1451965 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 13:22:46.911189 1451965 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 13:22:46.912247 1451965 api_server.go:141] control plane version: v1.34.0
	I0929 13:22:46.912283 1451965 api_server.go:131] duration metric: took 2.005832949s to wait for apiserver health ...
	I0929 13:22:46.912293 1451965 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:22:46.916143 1451965 system_pods.go:59] 9 kube-system pods found
	I0929 13:22:46.916187 1451965 system_pods.go:61] "coredns-66bc5c9577-cw5kk" [5b49586a-3bb5-48d7-b06a-609ac93af91f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:22:46.916203 1451965 system_pods.go:61] "etcd-default-k8s-diff-port-625526" [2d0b05ff-77e6-4f8c-b9ab-a9e810f90e03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:22:46.916215 1451965 system_pods.go:61] "kindnet-mg2cv" [be7e29e8-85e3-4d1c-97c8-b06172f6acd1] Running
	I0929 13:22:46.916224 1451965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-625526" [3b59c180-6ad0-4e7e-846c-2d36b305302b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:22:46.916235 1451965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-625526" [1ac0c47c-7207-41e4-8ba7-6d26aca0a598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:22:46.916244 1451965 system_pods.go:61] "kube-proxy-pttl4" [42ea995d-bf9d-43ac-925a-0099d0a73ff6] Running
	I0929 13:22:46.916287 1451965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-625526" [c2b2a123-3ebe-4527-bf25-a88016fb3149] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:22:46.916308 1451965 system_pods.go:61] "metrics-server-746fcd58dc-k2ghw" [c11a1fa7-c21f-47af-980f-7b1b08f6cf57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:22:46.916344 1451965 system_pods.go:61] "storage-provisioner" [f83a8afe-6adf-4ece-b2f0-73fbc4936418] Running
	I0929 13:22:46.916352 1451965 system_pods.go:74] duration metric: took 4.052832ms to wait for pod list to return data ...
	I0929 13:22:46.916360 1451965 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:22:46.918923 1451965 default_sa.go:45] found service account: "default"
	I0929 13:22:46.918943 1451965 default_sa.go:55] duration metric: took 2.577376ms for default service account to be created ...
	I0929 13:22:46.918951 1451965 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:22:46.921574 1451965 system_pods.go:86] 9 kube-system pods found
	I0929 13:22:46.921603 1451965 system_pods.go:89] "coredns-66bc5c9577-cw5kk" [5b49586a-3bb5-48d7-b06a-609ac93af91f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:22:46.921616 1451965 system_pods.go:89] "etcd-default-k8s-diff-port-625526" [2d0b05ff-77e6-4f8c-b9ab-a9e810f90e03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:22:46.921624 1451965 system_pods.go:89] "kindnet-mg2cv" [be7e29e8-85e3-4d1c-97c8-b06172f6acd1] Running
	I0929 13:22:46.921630 1451965 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-625526" [3b59c180-6ad0-4e7e-846c-2d36b305302b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:22:46.921637 1451965 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-625526" [1ac0c47c-7207-41e4-8ba7-6d26aca0a598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:22:46.921640 1451965 system_pods.go:89] "kube-proxy-pttl4" [42ea995d-bf9d-43ac-925a-0099d0a73ff6] Running
	I0929 13:22:46.921645 1451965 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-625526" [c2b2a123-3ebe-4527-bf25-a88016fb3149] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:22:46.921652 1451965 system_pods.go:89] "metrics-server-746fcd58dc-k2ghw" [c11a1fa7-c21f-47af-980f-7b1b08f6cf57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:22:46.921656 1451965 system_pods.go:89] "storage-provisioner" [f83a8afe-6adf-4ece-b2f0-73fbc4936418] Running
	I0929 13:22:46.921665 1451965 system_pods.go:126] duration metric: took 2.708849ms to wait for k8s-apps to be running ...
	I0929 13:22:46.921672 1451965 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:22:46.921714 1451965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:22:46.934436 1451965 system_svc.go:56] duration metric: took 12.754747ms WaitForService to wait for kubelet
	I0929 13:22:46.934460 1451965 kubeadm.go:578] duration metric: took 4.674861301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:22:46.934495 1451965 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:22:46.936841 1451965 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:22:46.936864 1451965 node_conditions.go:123] node cpu capacity is 8
	I0929 13:22:46.936876 1451965 node_conditions.go:105] duration metric: took 2.372243ms to run NodePressure ...
	I0929 13:22:46.936889 1451965 start.go:241] waiting for startup goroutines ...
	I0929 13:22:46.936901 1451965 start.go:246] waiting for cluster config update ...
	I0929 13:22:46.936918 1451965 start.go:255] writing updated cluster config ...
	I0929 13:22:46.937264 1451965 ssh_runner.go:195] Run: rm -f paused
	I0929 13:22:46.940828 1451965 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:22:46.944291 1451965 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cw5kk" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 13:22:48.949639 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:22:50.950071 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:22:52.950926 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:22:54.986123 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:22:57.449704 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:22:59.950006 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:01.950103 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:04.449323 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:06.450449 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:08.949668 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:11.449887 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:13.949775 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:16.449878 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	W0929 13:23:18.450120 1451965 pod_ready.go:104] pod "coredns-66bc5c9577-cw5kk" is not "Ready", error: <nil>
	I0929 13:23:19.449727 1451965 pod_ready.go:94] pod "coredns-66bc5c9577-cw5kk" is "Ready"
	I0929 13:23:19.449758 1451965 pod_ready.go:86] duration metric: took 32.505449524s for pod "coredns-66bc5c9577-cw5kk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:19.452052 1451965 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:19.455652 1451965 pod_ready.go:94] pod "etcd-default-k8s-diff-port-625526" is "Ready"
	I0929 13:23:19.455673 1451965 pod_ready.go:86] duration metric: took 3.599129ms for pod "etcd-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:19.457529 1451965 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:19.461146 1451965 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-625526" is "Ready"
	I0929 13:23:19.461164 1451965 pod_ready.go:86] duration metric: took 3.616747ms for pod "kube-apiserver-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:19.462942 1451965 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:19.648559 1451965 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-625526" is "Ready"
	I0929 13:23:19.648591 1451965 pod_ready.go:86] duration metric: took 185.630355ms for pod "kube-controller-manager-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:19.848705 1451965 pod_ready.go:83] waiting for pod "kube-proxy-pttl4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:20.248629 1451965 pod_ready.go:94] pod "kube-proxy-pttl4" is "Ready"
	I0929 13:23:20.248657 1451965 pod_ready.go:86] duration metric: took 399.920965ms for pod "kube-proxy-pttl4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:20.448511 1451965 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:20.848385 1451965 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-625526" is "Ready"
	I0929 13:23:20.848415 1451965 pod_ready.go:86] duration metric: took 399.878755ms for pod "kube-scheduler-default-k8s-diff-port-625526" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:23:20.848429 1451965 pod_ready.go:40] duration metric: took 33.907571648s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:23:20.895862 1451965 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:23:20.897538 1451965 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-625526" cluster and "default" namespace by default
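
Editor's note: the pod_ready.go lines above show minikube polling each kube-system pod until its Ready condition turns true, within a 4m0s budget. For readers who want to reproduce that check by hand, the following is a minimal client-go sketch, not minikube's implementation; it assumes a kubeconfig at the default path and reuses the CoreDNS pod name from this log.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Same overall budget as the log above (4m0s), re-checking every 2s,
    	// which is roughly the cadence of the pod_ready.go warnings shown here.
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()

    	const name = "coredns-66bc5c9577-cw5kk" // pod name taken from this log
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Printf("pod %q is Ready\n", name)
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Printf("timed out waiting for %q\n", name)
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }
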
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f2a222fea4519       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   8                   ec6453a6e5b12       dashboard-metrics-scraper-5f989dc9cf-5f27f
	a038fdcf70d29       6e38f40d628db       17 minutes ago      Running             storage-provisioner         2                   619afdfc702d3       storage-provisioner
	07ebeb94d9201       409467f978b4a       18 minutes ago      Running             kindnet-cni                 1                   5862d4728adcf       kindnet-tjgv6
	fbd6fd44a800d       56cc512116c8f       18 minutes ago      Running             busybox                     1                   a2045c4c9d3f9       busybox
	751d011dd2b4f       6e38f40d628db       18 minutes ago      Exited              storage-provisioner         1                   619afdfc702d3       storage-provisioner
	9cb18fdc41964       ead0a4a53df89       18 minutes ago      Running             coredns                     1                   387d4f18250f3       coredns-5dd5756b68-lrkcg
	94774ca53ecc3       ea1030da44aa1       18 minutes ago      Running             kube-proxy                  1                   06e6ebaf5e3c4       kube-proxy-9nsk9
	409cbf88accf8       bb5e0dde9054c       18 minutes ago      Running             kube-apiserver              1                   31dbc9330bff1       kube-apiserver-old-k8s-version-495121
	8d916df956e81       f6f496300a2ae       18 minutes ago      Running             kube-scheduler              1                   b5ea8e2ae6437       kube-scheduler-old-k8s-version-495121
	b75d2434561ff       4be79c38a4bab       18 minutes ago      Running             kube-controller-manager     1                   b57d4294bc4b4       kube-controller-manager-old-k8s-version-495121
	693060a993d1f       73deb9a3f7025       18 minutes ago      Running             etcd                        1                   ffbcadea4319c       etcd-old-k8s-version-495121
	b94dcb6279ab7       56cc512116c8f       19 minutes ago      Exited              busybox                     0                   471bb9570b0a6       busybox
	2ab90d73c849e       ead0a4a53df89       19 minutes ago      Exited              coredns                     0                   31ac04d1de38d       coredns-5dd5756b68-lrkcg
	713c1f40cb688       409467f978b4a       19 minutes ago      Exited              kindnet-cni                 0                   f1f2ce685864b       kindnet-tjgv6
	e08df30dda564       ea1030da44aa1       19 minutes ago      Exited              kube-proxy                  0                   0e6af22aff9b5       kube-proxy-9nsk9
	93fa1c32e856e       4be79c38a4bab       19 minutes ago      Exited              kube-controller-manager     0                   731628c0d3955       kube-controller-manager-old-k8s-version-495121
	17aad6d9a070d       f6f496300a2ae       19 minutes ago      Exited              kube-scheduler              0                   7c4c96e49175b       kube-scheduler-old-k8s-version-495121
	ce444d8ceca3b       bb5e0dde9054c       19 minutes ago      Exited              kube-apiserver              0                   a9884e42f1299       kube-apiserver-old-k8s-version-495121
	a6912c13f2e79       73deb9a3f7025       19 minutes ago      Exited              etcd                        0                   1a68764da09b3       etcd-old-k8s-version-495121
	
	
	==> containerd <==
	Sep 29 13:20:43 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:20:43.245132357Z" level=info msg="RemoveContainer for \"e3b2666b123b9b0f81a7035bbb94e1aea9ad2b981195663623ef0bbe30cc8677\" returns successfully"
	Sep 29 13:20:53 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:20:53.642197418Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:20:53 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:20:53.643843202Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:20:54 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:20:54.295250662Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:20:56 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:20:56.149909143Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:20:56 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:20:56.149987570Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Sep 29 13:25:04 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:04.639733723Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:25:04 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:04.677600927Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 29 13:25:04 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:04.678847665Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 13:25:04 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:04.678936201Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.640705729Z" level=info msg="CreateContainer within sandbox \"ec6453a6e5b12ffed1e8ad5f07111a62816a635458bbe12ce045e40f1b07e3d0\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.650515989Z" level=info msg="CreateContainer within sandbox \"ec6453a6e5b12ffed1e8ad5f07111a62816a635458bbe12ce045e40f1b07e3d0\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442\""
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.651004563Z" level=info msg="StartContainer for \"f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442\""
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.705296468Z" level=info msg="StartContainer for \"f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442\" returns successfully"
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.717688439Z" level=info msg="received exit event container_id:\"f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442\"  id:\"f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442\"  pid:3372  exit_status:1  exited_at:{seconds:1759152351  nanos:717483380}"
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.740374683Z" level=info msg="shim disconnected" id=f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442 namespace=k8s.io
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.740410635Z" level=warning msg="cleaning up after shim disconnected" id=f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442 namespace=k8s.io
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.740421388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.944348235Z" level=info msg="RemoveContainer for \"de8a21b0c83c880365b2b14bc4c41170ef55e6853e85aa642866ec65c32b5355\""
	Sep 29 13:25:51 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:25:51.947907349Z" level=info msg="RemoveContainer for \"de8a21b0c83c880365b2b14bc4c41170ef55e6853e85aa642866ec65c32b5355\" returns successfully"
	Sep 29 13:26:02 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:26:02.640318354Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:26:02 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:26:02.641834383Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:26:03 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:26:03.294351760Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:26:05 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:26:05.160265494Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:26:05 old-k8s-version-495121 containerd[476]: time="2025-09-29T13:26:05.160277964Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
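
Editor's note: the repeated `failed to decode hosts.toml" error="invalid `host` tree` messages mean containerd found a per-registry hosts.toml under /etc/containerd/certs.d/ that does not parse into the layout it expects (a top-level `server` plus one `[host."..."]` table per endpoint). As a point of reference, here is a small Go sketch that writes a hosts.toml for docker.io in the documented shape; the mirror endpoint is a placeholder assumption, not taken from this run, and the 429 rate-limit errors in the same section are a separate issue that this file does not address.

    package main

    import (
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Layout containerd documents for /etc/containerd/certs.d/docker.io/hosts.toml.
    	// "https://mirror.example.com" is a placeholder mirror, used only for illustration.
    	hostsToml := `server = "https://registry-1.docker.io"

    [host."https://mirror.example.com"]
      capabilities = ["pull", "resolve"]
    `
    	dir := "/etc/containerd/certs.d/docker.io"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "hosts.toml"), []byte(hostsToml), 0o644); err != nil {
    		panic(err)
    	}
    }
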
	
	
	==> coredns [2ab90d73c849e3b421e70c032ef293fdaac96e068a9e25b6496ff8474b906234] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37065 - 11611 "HINFO IN 4379644100506618813.14631479693037293. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.033617954s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9cb18fdc419640ecd1f971b932e5008b1e25aba3a2c8082a6f7578eb631baae8] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59790 - 9195 "HINFO IN 5803060664264663451.3040212327738087743. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027895818s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-495121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-495121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=old-k8s-version-495121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_08_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:08:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-495121
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:28:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:25:23 +0000   Mon, 29 Sep 2025 13:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:25:23 +0000   Mon, 29 Sep 2025 13:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:25:23 +0000   Mon, 29 Sep 2025 13:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:25:23 +0000   Mon, 29 Sep 2025 13:08:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-495121
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 060a6b1b9edf42629635d8c7738efe8f
	  System UUID:                3ad91b0c-af3b-4a9d-8939-9ef7555c85d9
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-lrkcg                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-495121                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-tjgv6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-495121             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-495121    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-9nsk9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-495121             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-t2mql                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-5f27f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-r4kbj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-495121 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-495121 event: Registered Node old-k8s-version-495121 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-495121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node old-k8s-version-495121 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-495121 event: Registered Node old-k8s-version-495121 in Controller
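
Editor's note: the Conditions and Capacity tables above are read from the Node object's status, and the earlier node_conditions.go lines in this log (cpu capacity 8, ephemeral storage 304681132Ki) report the same fields. A minimal client-go sketch that prints them for this node, for illustration only and assuming a kubeconfig at the default path:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Node name as shown in the "describe nodes" output above.
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "old-k8s-version-495121", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Corresponds to the Conditions table (MemoryPressure, DiskPressure, PIDPressure, Ready).
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %s (%s)\n", c.Type, c.Status, c.Reason)
    	}
    	// Corresponds to the Capacity table and the node_conditions.go lines.
    	fmt.Println("cpu capacity:      ", node.Status.Capacity.Cpu().String())
    	fmt.Println("ephemeral-storage: ", node.Status.Capacity.StorageEphemeral().String())
    }
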
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [693060a993d1fef0eb00234c496e6e3658bb2fd678e27409bfda5ead0bcfce1e] <==
	{"level":"info","ts":"2025-09-29T13:09:30.57284Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T13:09:31.755203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T13:09:31.755243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T13:09:31.755273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T13:09:31.755293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.755301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.755317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.75533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T13:09:31.756209Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-495121 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T13:09:31.756277Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:09:31.756329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:09:31.756437Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T13:09:31.756504Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T13:09:31.757409Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T13:09:31.75758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T13:09:50.007936Z","caller":"traceutil/trace.go:171","msg":"trace[1118325892] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"158.422839ms","start":"2025-09-29T13:09:49.849495Z","end":"2025-09-29T13:09:50.007918Z","steps":["trace[1118325892] 'process raft request'  (duration: 85.206887ms)","trace[1118325892] 'compare'  (duration: 73.131049ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:19:31.772408Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":951}
	{"level":"info","ts":"2025-09-29T13:19:31.774073Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":951,"took":"1.370032ms","hash":2617858120}
	{"level":"info","ts":"2025-09-29T13:19:31.774106Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2617858120,"revision":951,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T13:21:34.431717Z","caller":"traceutil/trace.go:171","msg":"trace[2044776756] transaction","detail":"{read_only:false; response_revision:1309; number_of_response:1; }","duration":"146.560185ms","start":"2025-09-29T13:21:34.285129Z","end":"2025-09-29T13:21:34.43169Z","steps":["trace[2044776756] 'process raft request'  (duration: 125.489216ms)","trace[2044776756] 'compare'  (duration: 20.980596ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T13:21:34.70708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.066914ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596038336924455 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1308 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1034 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T13:21:34.707164Z","caller":"traceutil/trace.go:171","msg":"trace[497264248] transaction","detail":"{read_only:false; response_revision:1310; number_of_response:1; }","duration":"211.332476ms","start":"2025-09-29T13:21:34.495817Z","end":"2025-09-29T13:21:34.70715Z","steps":["trace[497264248] 'process raft request'  (duration: 85.601263ms)","trace[497264248] 'compare'  (duration: 124.955363ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:24:31.777513Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1200}
	{"level":"info","ts":"2025-09-29T13:24:31.779478Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1200,"took":"1.614217ms","hash":643000283}
	{"level":"info","ts":"2025-09-29T13:24:31.779541Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":643000283,"revision":1200,"compact-revision":951}
	
	
	==> etcd [a6912c13f2e79c2ffb3a0b3f3bbb9acba46a19a6e9f51d9f98fcfda1050fa001] <==
	{"level":"info","ts":"2025-09-29T13:08:21.865712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T13:08:21.865724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-09-29T13:08:21.865734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T13:08:21.866654Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-495121 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T13:08:21.866881Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:21.86699Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:08:21.867007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T13:08:21.867113Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T13:08:21.867151Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T13:08:21.868469Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T13:08:21.868474Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T13:08:21.869471Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:21.872353Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:21.87239Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T13:08:37.323352Z","caller":"traceutil/trace.go:171","msg":"trace[959236800] linearizableReadLoop","detail":"{readStateIndex:293; appliedIndex:292; }","duration":"103.454638ms","start":"2025-09-29T13:08:37.219877Z","end":"2025-09-29T13:08:37.323332Z","steps":["trace[959236800] 'read index received'  (duration: 103.309711ms)","trace[959236800] 'applied index is now lower than readState.Index'  (duration: 144.255µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:08:37.323448Z","caller":"traceutil/trace.go:171","msg":"trace[620451919] transaction","detail":"{read_only:false; response_revision:281; number_of_response:1; }","duration":"121.986176ms","start":"2025-09-29T13:08:37.201438Z","end":"2025-09-29T13:08:37.323424Z","steps":["trace[620451919] 'process raft request'  (duration: 121.789257ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:08:37.323625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.704449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:08:37.323683Z","caller":"traceutil/trace.go:171","msg":"trace[85859342] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:281; }","duration":"103.823279ms","start":"2025-09-29T13:08:37.219839Z","end":"2025-09-29T13:08:37.323662Z","steps":["trace[85859342] 'agreement among raft nodes before linearized reading'  (duration: 103.60334ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:08:38.407635Z","caller":"traceutil/trace.go:171","msg":"trace[933570630] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:296; }","duration":"187.263359ms","start":"2025-09-29T13:08:38.220358Z","end":"2025-09-29T13:08:38.407621Z","steps":["trace[933570630] 'read index received'  (duration: 187.121481ms)","trace[933570630] 'applied index is now lower than readState.Index'  (duration: 141.579µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:08:38.407678Z","caller":"traceutil/trace.go:171","msg":"trace[1940461009] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"190.402332ms","start":"2025-09-29T13:08:38.217254Z","end":"2025-09-29T13:08:38.407656Z","steps":["trace[1940461009] 'process raft request'  (duration: 190.257703ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:08:38.407799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.443602ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:08:38.407838Z","caller":"traceutil/trace.go:171","msg":"trace[608446229] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:285; }","duration":"187.499358ms","start":"2025-09-29T13:08:38.220329Z","end":"2025-09-29T13:08:38.407829Z","steps":["trace[608446229] 'agreement among raft nodes before linearized reading'  (duration: 187.363643ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:08:38.812102Z","caller":"traceutil/trace.go:171","msg":"trace[1712206487] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"160.789593ms","start":"2025-09-29T13:08:38.651278Z","end":"2025-09-29T13:08:38.812067Z","steps":["trace[1712206487] 'process raft request'  (duration: 89.251367ms)","trace[1712206487] 'compare'  (duration: 71.239129ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T13:08:39.05647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.835854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-09-29T13:08:39.056542Z","caller":"traceutil/trace.go:171","msg":"trace[1242313453] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:289; }","duration":"105.932451ms","start":"2025-09-29T13:08:38.950595Z","end":"2025-09-29T13:08:39.056528Z","steps":["trace[1242313453] 'range keys from in-memory index tree'  (duration: 105.739359ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:28:11 up  6:10,  0 users,  load average: 0.23, 0.50, 1.06
	Linux old-k8s-version-495121 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [07ebeb94d9201d6034e8db286dd4633125bfa4b328ce93168b6d0df2a5dc09f2] <==
	I0929 13:26:05.109454       1 main.go:301] handling current node
	I0929 13:26:15.117078       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:26:15.117109       1 main.go:301] handling current node
	I0929 13:26:25.113726       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:26:25.113765       1 main.go:301] handling current node
	I0929 13:26:35.108668       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:26:35.108716       1 main.go:301] handling current node
	I0929 13:26:45.116850       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:26:45.116883       1 main.go:301] handling current node
	I0929 13:26:55.113197       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:26:55.113261       1 main.go:301] handling current node
	I0929 13:27:05.108552       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:27:05.108722       1 main.go:301] handling current node
	I0929 13:27:15.115469       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:27:15.115500       1 main.go:301] handling current node
	I0929 13:27:25.117037       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:27:25.117072       1 main.go:301] handling current node
	I0929 13:27:35.109063       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:27:35.109093       1 main.go:301] handling current node
	I0929 13:27:45.117099       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:27:45.117137       1 main.go:301] handling current node
	I0929 13:27:55.118115       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:27:55.118146       1 main.go:301] handling current node
	I0929 13:28:05.112421       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:28:05.112451       1 main.go:301] handling current node
	
	
	==> kindnet [713c1f40cb6884c94b00822e0e97263626264336dcaac97047f555996036cff6] <==
	I0929 13:08:44.762790       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:08:44.763123       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0929 13:08:44.763310       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:08:44.763327       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:08:44.763349       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:08:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:08:44.963750       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:08:44.963795       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:08:44.963809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:08:45.149140       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:08:45.349204       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:08:45.349238       1 metrics.go:72] Registering metrics
	I0929 13:08:45.349329       1 controller.go:711] "Syncing nftables rules"
	I0929 13:08:54.970059       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:08:54.970105       1 main.go:301] handling current node
	I0929 13:09:04.964036       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 13:09:04.964087       1 main.go:301] handling current node
	
	
	==> kube-apiserver [409cbf88accf80c713654c9bd05bebd979bd2fb817752533f6109a85c1272d91] <==
	E0929 13:24:33.972324       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 13:24:33.972336       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 13:24:33.972374       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:24:33.973499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:25:32.861356       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.57.16:443: connect: connection refused
	I0929 13:25:32.861382       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:25:33.973197       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:25:33.973238       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 13:25:33.973249       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:25:33.974311       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:25:33.974400       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:25:33.974414       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:26:32.861746       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.57.16:443: connect: connection refused
	I0929 13:26:32.861772       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 13:27:32.861475       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.57.16:443: connect: connection refused
	I0929 13:27:32.861499       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:27:33.974057       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:27:33.974086       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 13:27:33.974094       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:27:33.975160       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:27:33.975223       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:27:33.975238       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ce444d8ceca3be0406177949fc0a18555192824013e62d576c73df73c7e3426d] <==
	I0929 13:08:24.944113       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 13:08:25.490054       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0929 13:08:26.592748       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0929 13:08:26.605185       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:08:26.615055       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0929 13:08:40.007747       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0929 13:08:40.408561       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	W0929 13:09:10.099906       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.100007       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 13:09:10.100320       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0929 13:09:10.100347       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:09:10.106812       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.106920       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0929 13:09:10.107001       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:09:10.107035       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I0929 13:09:10.107049       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 13:09:10.189775       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.108.57.16"}
	W0929 13:09:10.195135       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.195309       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0929 13:09:10.206313       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 13:09:10.206388       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	
	
	==> kube-controller-manager [93fa1c32e856e877fc3536e7e9322026ec081194ee396a0ca36f2bb324e84e70] <==
	I0929 13:08:40.432735       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9nsk9"
	I0929 13:08:40.435597       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tjgv6"
	I0929 13:08:40.569008       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lrkcg"
	I0929 13:08:40.582074       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-r6nct"
	I0929 13:08:40.594310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="582.031636ms"
	I0929 13:08:40.608794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.413785ms"
	I0929 13:08:40.609163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="206.997µs"
	I0929 13:08:40.617249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.581µs"
	I0929 13:08:40.716594       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0929 13:08:40.724664       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-r6nct"
	I0929 13:08:40.730531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.155928ms"
	I0929 13:08:40.736467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.858703ms"
	I0929 13:08:40.736578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.062µs"
	I0929 13:08:42.752078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.170743ms"
	I0929 13:08:42.759806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.309µs"
	I0929 13:08:42.761460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.349µs"
	I0929 13:08:56.764571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="166.733µs"
	I0929 13:08:56.788431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.057469ms"
	I0929 13:08:56.788570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.87µs"
	I0929 13:09:10.124733       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0929 13:09:10.132802       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-t2mql"
	I0929 13:09:10.140882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="17.197814ms"
	I0929 13:09:10.149887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="8.712376ms"
	I0929 13:09:10.167645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="17.584587ms"
	I0929 13:09:10.167743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="58.543µs"
	
	
	==> kube-controller-manager [b75d2434561ffecb03a963adac8a285eab3878a2ad25c7a9c81b1fbf6cdef6e7] <==
	I0929 13:23:15.954067       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:23:45.542392       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:23:45.960600       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:24:15.546445       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:24:15.967467       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:24:45.550509       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:24:45.974826       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:25:15.555543       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:25:15.982330       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:25:18.649702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="116.459µs"
	I0929 13:25:31.648810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="112.025µs"
	E0929 13:25:45.560339       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:25:45.988748       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:25:51.953577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.838µs"
	I0929 13:25:55.693921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.627µs"
	E0929 13:26:15.564902       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:26:15.995116       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 13:26:17.648915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="102.021µs"
	I0929 13:26:28.648375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="125.93µs"
	E0929 13:26:45.569903       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:26:46.001771       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:27:15.573637       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:27:16.008211       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 13:27:45.578253       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 13:27:46.015055       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [94774ca53ecc33a65be8ebcbbf981446c867be0a265707362ea3cac13b0a4e18] <==
	I0929 13:09:34.428841       1 server_others.go:69] "Using iptables proxy"
	I0929 13:09:34.443187       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 13:09:34.479764       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:09:34.482643       1 server_others.go:152] "Using iptables Proxier"
	I0929 13:09:34.482725       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 13:09:34.482746       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 13:09:34.482783       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 13:09:34.483238       1 server.go:846] "Version info" version="v1.28.0"
	I0929 13:09:34.483321       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:34.483935       1 config.go:97] "Starting endpoint slice config controller"
	I0929 13:09:34.484005       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 13:09:34.484024       1 config.go:315] "Starting node config controller"
	I0929 13:09:34.484086       1 config.go:188] "Starting service config controller"
	I0929 13:09:34.484129       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 13:09:34.484044       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 13:09:34.584441       1 shared_informer.go:318] Caches are synced for service config
	I0929 13:09:34.584546       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 13:09:34.584980       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [e08df30dda564a006470ec044cfc6afc64a3beea8f1fa3c3115860dc7abcc524] <==
	I0929 13:08:41.028011       1 server_others.go:69] "Using iptables proxy"
	I0929 13:08:41.037778       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 13:08:41.060348       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:08:41.063141       1 server_others.go:152] "Using iptables Proxier"
	I0929 13:08:41.063187       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 13:08:41.063196       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 13:08:41.063241       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 13:08:41.063476       1 server.go:846] "Version info" version="v1.28.0"
	I0929 13:08:41.063491       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:08:41.064165       1 config.go:97] "Starting endpoint slice config controller"
	I0929 13:08:41.064255       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 13:08:41.064295       1 config.go:188] "Starting service config controller"
	I0929 13:08:41.064326       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 13:08:41.065673       1 config.go:315] "Starting node config controller"
	I0929 13:08:41.065698       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 13:08:41.164574       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 13:08:41.165794       1 shared_informer.go:318] Caches are synced for node config
	I0929 13:08:41.165821       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [17aad6d9a070d7152eb3c4cde533105a2245fe898116322557bcf9e72f1c9c09] <==
	E0929 13:08:23.485452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0929 13:08:23.485475       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 13:08:23.485491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 13:08:23.485241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0929 13:08:23.485558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:23.485175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 13:08:23.485587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:23.485523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0929 13:08:23.485611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 13:08:23.485627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:23.485533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0929 13:08:23.485653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0929 13:08:24.306272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 13:08:24.306307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0929 13:08:24.415289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0929 13:08:24.415334       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0929 13:08:24.457854       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0929 13:08:24.457899       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:08:24.526173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 13:08:24.526213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0929 13:08:24.606272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0929 13:08:24.606310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0929 13:08:24.635937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0929 13:08:24.636013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0929 13:08:26.482542       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8d916df956e81739bced98cd9c6200c78b40ecdf9581e3dd987ac8c0f0d40caa] <==
	I0929 13:09:30.863173       1 serving.go:348] Generated self-signed cert in-memory
	W0929 13:09:32.962668       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:09:32.962804       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0929 13:09:32.962864       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:09:32.962900       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:09:32.990773       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 13:09:32.990865       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:32.992728       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:09:32.992784       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 13:09:32.993803       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 13:09:32.993899       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 13:09:33.093190       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 13:26:53 old-k8s-version-495121 kubelet[611]: E0929 13:26:53.639392     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:27:03 old-k8s-version-495121 kubelet[611]: I0929 13:27:03.639053     611 scope.go:117] "RemoveContainer" containerID="f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442"
	Sep 29 13:27:03 old-k8s-version-495121 kubelet[611]: E0929 13:27:03.639452     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:27:06 old-k8s-version-495121 kubelet[611]: E0929 13:27:06.639526     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:27:07 old-k8s-version-495121 kubelet[611]: E0929 13:27:07.639361     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:27:18 old-k8s-version-495121 kubelet[611]: I0929 13:27:18.638304     611 scope.go:117] "RemoveContainer" containerID="f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442"
	Sep 29 13:27:18 old-k8s-version-495121 kubelet[611]: E0929 13:27:18.638697     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:27:18 old-k8s-version-495121 kubelet[611]: E0929 13:27:18.639288     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:27:18 old-k8s-version-495121 kubelet[611]: E0929 13:27:18.639288     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:27:29 old-k8s-version-495121 kubelet[611]: E0929 13:27:29.639684     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:27:30 old-k8s-version-495121 kubelet[611]: E0929 13:27:30.638985     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:27:33 old-k8s-version-495121 kubelet[611]: I0929 13:27:33.639247     611 scope.go:117] "RemoveContainer" containerID="f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442"
	Sep 29 13:27:33 old-k8s-version-495121 kubelet[611]: E0929 13:27:33.639649     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:27:41 old-k8s-version-495121 kubelet[611]: E0929 13:27:41.639935     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:27:44 old-k8s-version-495121 kubelet[611]: E0929 13:27:44.639533     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:27:45 old-k8s-version-495121 kubelet[611]: I0929 13:27:45.639196     611 scope.go:117] "RemoveContainer" containerID="f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442"
	Sep 29 13:27:45 old-k8s-version-495121 kubelet[611]: E0929 13:27:45.639487     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:27:55 old-k8s-version-495121 kubelet[611]: E0929 13:27:55.639107     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:27:55 old-k8s-version-495121 kubelet[611]: E0929 13:27:55.639111     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:27:57 old-k8s-version-495121 kubelet[611]: I0929 13:27:57.638687     611 scope.go:117] "RemoveContainer" containerID="f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442"
	Sep 29 13:27:57 old-k8s-version-495121 kubelet[611]: E0929 13:27:57.638935     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	Sep 29 13:28:08 old-k8s-version-495121 kubelet[611]: E0929 13:28:08.639326     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-t2mql" podUID="993269b1-5535-4e85-a7c1-3f23bd738880"
	Sep 29 13:28:08 old-k8s-version-495121 kubelet[611]: E0929 13:28:08.639386     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r4kbj" podUID="60e7b4a4-451c-4c51-ae68-8a626ab1e1a7"
	Sep 29 13:28:10 old-k8s-version-495121 kubelet[611]: I0929 13:28:10.639302     611 scope.go:117] "RemoveContainer" containerID="f2a222fea451954184c6e7e91d00c517086d018b29b84ee9d530d99b40f9c442"
	Sep 29 13:28:10 old-k8s-version-495121 kubelet[611]: E0929 13:28:10.639713     611 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5f27f_kubernetes-dashboard(7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5f27f" podUID="7eaf1de6-b6e8-45f6-a547-6633ab9ee5a4"
	
	
	==> storage-provisioner [751d011dd2b4f7bfde9c11190bfefc1f4af16becd517aa7aef474ca37e4713a9] <==
	I0929 13:09:34.464775       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:10:04.468263       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a038fdcf70d29567f594b699e32c1ff935cbb3227c70709b73bc8579cb052b3b] <==
	I0929 13:10:18.722633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 13:10:18.731313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 13:10:18.731354       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 13:10:36.129377       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 13:10:36.129568       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-495121_0e869ed1-b387-4391-9538-34bd9f4b72bb!
	I0929 13:10:36.129532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99a8355f-5957-46bd-a556-a580d532ae77", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-495121_0e869ed1-b387-4391-9538-34bd9f4b72bb became leader
	I0929 13:10:36.229884       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-495121_0e869ed1-b387-4391-9538-34bd9f4b72bb!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-495121 -n old-k8s-version-495121
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-495121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-495121 describe pod metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-495121 describe pod metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj: exit status 1 (59.10753ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-t2mql" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-r4kbj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-495121 describe pod metrics-server-57f55c9bc5-t2mql kubernetes-dashboard-8694d4445c-r4kbj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rfr6g" [93751269-985a-4d2f-9768-407c72ae300b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:20:20.684257 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-644246 -n embed-certs-644246
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:28:26.814160477 +0000 UTC m=+4271.398183658
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-644246 describe po kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-644246 describe po kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-rfr6g
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-644246/192.168.103.2
Start Time:       Mon, 29 Sep 2025 13:09:52 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9znr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-r9znr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g to embed-certs-644246
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   Pulling    12m (x6 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Normal   BackOff    3m22s (x63 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m22s (x63 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-644246 logs kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-644246 logs kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard: exit status 1 (73.039528ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-rfr6g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-644246 logs kubernetes-dashboard-855c9754f9-rfr6g -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-644246 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-644246
helpers_test.go:243: (dbg) docker inspect embed-certs-644246:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45",
	        "Created": "2025-09-29T13:08:39.203437343Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1425320,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:09:39.085003932Z",
	            "FinishedAt": "2025-09-29T13:09:38.261763457Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/hosts",
	        "LogPath": "/var/lib/docker/containers/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45/8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45-json.log",
	        "Name": "/embed-certs-644246",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-644246:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-644246",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8a578dd7f9fae93da8cc4d478e48cf195e16fd681586946c95adad77159e0c45",
	                "LowerDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91436ff8e9a5aab2a206215824f18fd75369454e6d32e8226161eb99175b60de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-644246",
	                "Source": "/var/lib/docker/volumes/embed-certs-644246/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-644246",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-644246",
	                "name.minikube.sigs.k8s.io": "embed-certs-644246",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed260214a0ab2fdd1689f64c200597a193992633e9689ea01a8bcc875fa1f3e9",
	            "SandboxKey": "/var/run/docker/netns/ed260214a0ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33606"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33607"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33610"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33608"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33609"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-644246": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:dc:72:bf:a7:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4aa0b62087589daf1b8a6964a178a4f629a9ee977cbac243d7975189f723e4fb",
	                    "EndpointID": "1325dd8031237a00772a0b4cf5a9a39052e4518aca1060d9c1e37c2d9df360d8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-644246",
	                        "8a578dd7f9fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-644246 -n embed-certs-644246
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-644246 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-644246 logs -n 25: (1.590512536s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p calico-321209 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cri-dockerd --version                                                                                                                                                                                                         │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo containerd config dump                                                                                                                                                                                                        │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ ssh     │ -p calico-321209 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ ssh     │ -p calico-321209 sudo crio config                                                                                                                                                                                                                   │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ delete  │ -p calico-321209                                                                                                                                                                                                                                    │ calico-321209                │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ start   │ -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-625526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ stop    │ -p default-k8s-diff-port-625526 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-625526 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:23 UTC │
	│ image   │ old-k8s-version-495121 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ start   │ -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:28:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:28:18.297897 1459062 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:28:18.298062 1459062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:18.298076 1459062 out.go:374] Setting ErrFile to fd 2...
	I0929 13:28:18.298083 1459062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:18.298285 1459062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:28:18.298782 1459062 out.go:368] Setting JSON to false
	I0929 13:28:18.300216 1459062 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":22235,"bootTime":1759130263,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:28:18.300330 1459062 start.go:140] virtualization: kvm guest
	I0929 13:28:18.302676 1459062 out.go:179] * [newest-cni-740698] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:28:18.303909 1459062 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:28:18.303955 1459062 notify.go:220] Checking for updates...
	I0929 13:28:18.305796 1459062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:28:18.306781 1459062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:18.307657 1459062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:28:18.308532 1459062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:28:18.309353 1459062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:28:18.310600 1459062 config.go:182] Loaded profile config "default-k8s-diff-port-625526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:18.310693 1459062 config.go:182] Loaded profile config "embed-certs-644246": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:18.310776 1459062 config.go:182] Loaded profile config "no-preload-554589": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:18.310861 1459062 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:28:18.335380 1459062 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:28:18.335468 1459062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:18.393442 1459062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:28:18.383097834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:18.393581 1459062 docker.go:318] overlay module found
	I0929 13:28:18.395550 1459062 out.go:179] * Using the docker driver based on user configuration
	I0929 13:28:18.396642 1459062 start.go:304] selected driver: docker
	I0929 13:28:18.396658 1459062 start.go:924] validating driver "docker" against <nil>
	I0929 13:28:18.396670 1459062 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:28:18.397344 1459062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:18.453083 1459062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 13:28:18.442683035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:18.453256 1459062 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0929 13:28:18.453285 1459062 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0929 13:28:18.453532 1459062 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:28:18.455259 1459062 out.go:179] * Using Docker driver with root privileges
	I0929 13:28:18.456076 1459062 cni.go:84] Creating CNI manager for ""
	I0929 13:28:18.456156 1459062 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:28:18.456171 1459062 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 13:28:18.456272 1459062 start.go:348] cluster config:
	{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:18.457290 1459062 out.go:179] * Starting "newest-cni-740698" primary control-plane node in "newest-cni-740698" cluster
	I0929 13:28:18.458174 1459062 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:28:18.459070 1459062 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:28:18.459861 1459062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:18.459901 1459062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:28:18.459906 1459062 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 13:28:18.459936 1459062 cache.go:58] Caching tarball of preloaded images
	I0929 13:28:18.460042 1459062 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 13:28:18.460053 1459062 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 13:28:18.460175 1459062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json ...
	I0929 13:28:18.460202 1459062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json: {Name:mk12de801150b140061fcfe5a9d975179b9167db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:18.480781 1459062 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:28:18.480803 1459062 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:28:18.480819 1459062 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:28:18.480852 1459062 start.go:360] acquireMachinesLock for newest-cni-740698: {Name:mkf40a81be102ef43d2455f2435b32c6c1c894a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:28:18.480983 1459062 start.go:364] duration metric: took 109.454µs to acquireMachinesLock for "newest-cni-740698"
	I0929 13:28:18.481027 1459062 start.go:93] Provisioning new machine with config: &{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:28:18.481170 1459062 start.go:125] createHost starting for "" (driver="docker")
	I0929 13:28:18.483404 1459062 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:28:18.483661 1459062 start.go:159] libmachine.API.Create for "newest-cni-740698" (driver="docker")
	I0929 13:28:18.483716 1459062 client.go:168] LocalClient.Create starting
	I0929 13:28:18.483827 1459062 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem
	I0929 13:28:18.483864 1459062 main.go:141] libmachine: Decoding PEM data...
	I0929 13:28:18.483880 1459062 main.go:141] libmachine: Parsing certificate...
	I0929 13:28:18.483942 1459062 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem
	I0929 13:28:18.483992 1459062 main.go:141] libmachine: Decoding PEM data...
	I0929 13:28:18.484011 1459062 main.go:141] libmachine: Parsing certificate...
	I0929 13:28:18.484393 1459062 cli_runner.go:164] Run: docker network inspect newest-cni-740698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 13:28:18.503232 1459062 cli_runner.go:211] docker network inspect newest-cni-740698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 13:28:18.503320 1459062 network_create.go:284] running [docker network inspect newest-cni-740698] to gather additional debugging logs...
	I0929 13:28:18.503347 1459062 cli_runner.go:164] Run: docker network inspect newest-cni-740698
	W0929 13:28:18.521284 1459062 cli_runner.go:211] docker network inspect newest-cni-740698 returned with exit code 1
	I0929 13:28:18.521322 1459062 network_create.go:287] error running [docker network inspect newest-cni-740698]: docker network inspect newest-cni-740698: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-740698 not found
	I0929 13:28:18.521336 1459062 network_create.go:289] output of [docker network inspect newest-cni-740698]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-740698 not found
	
	** /stderr **
	I0929 13:28:18.521489 1459062 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:28:18.538651 1459062 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ea048bcecb48 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:2d:df:61:03:8a} reservation:<nil>}
	I0929 13:28:18.539247 1459062 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bd167e5ce7a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:0f:ec:5b:6d:8a} reservation:<nil>}
	I0929 13:28:18.540037 1459062 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-29d6980ca283 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:24:81:41:84:f3} reservation:<nil>}
	I0929 13:28:18.540949 1459062 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-95f7e8c85414 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:de:c6:f0:e9:56:ba} reservation:<nil>}
	I0929 13:28:18.541987 1459062 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cafb50}
	I0929 13:28:18.542016 1459062 network_create.go:124] attempt to create docker network newest-cni-740698 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0929 13:28:18.542071 1459062 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-740698 newest-cni-740698
	I0929 13:28:18.601428 1459062 network_create.go:108] docker network newest-cni-740698 192.168.85.0/24 created
	I0929 13:28:18.601460 1459062 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-740698" container
	I0929 13:28:18.601533 1459062 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:28:18.619942 1459062 cli_runner.go:164] Run: docker volume create newest-cni-740698 --label name.minikube.sigs.k8s.io=newest-cni-740698 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:28:18.637779 1459062 oci.go:103] Successfully created a docker volume newest-cni-740698
	I0929 13:28:18.637860 1459062 cli_runner.go:164] Run: docker run --rm --name newest-cni-740698-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-740698 --entrypoint /usr/bin/test -v newest-cni-740698:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:28:19.020756 1459062 oci.go:107] Successfully prepared a docker volume newest-cni-740698
	I0929 13:28:19.020802 1459062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:19.020824 1459062 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:28:19.020917 1459062 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-740698:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9ee978d11402f       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   8                   3882b3dba6189       dashboard-metrics-scraper-6ffb444bf9-wfz7q
	89d5dd60e7ea9       6e38f40d628db       17 minutes ago      Running             storage-provisioner         2                   040df12ea072b       storage-provisioner
	1bd11e57da3f5       409467f978b4a       18 minutes ago      Running             kindnet-cni                 1                   4304b56f80f6a       kindnet-bmw79
	23057c561f919       52546a367cc9e       18 minutes ago      Running             coredns                     1                   bcc830925fbde       coredns-66bc5c9577-7ks4q
	5edfb59ce5bfd       56cc512116c8f       18 minutes ago      Running             busybox                     1                   5bcfcf5204b14       busybox
	41d326ca33119       6e38f40d628db       18 minutes ago      Exited              storage-provisioner         1                   040df12ea072b       storage-provisioner
	ad925575e8d3c       df0860106674d       18 minutes ago      Running             kube-proxy                  1                   188acd4aa524a       kube-proxy-lg9p7
	57d2a9624e102       90550c43ad2bc       18 minutes ago      Running             kube-apiserver              1                   8234959642546       kube-apiserver-embed-certs-644246
	2cf5ae260ab08       a0af72f2ec6d6       18 minutes ago      Running             kube-controller-manager     1                   b42b761142114       kube-controller-manager-embed-certs-644246
	2c5e8c1c3ba13       5f1f5298c888d       18 minutes ago      Running             etcd                        1                   8adfdcbd113ce       etcd-embed-certs-644246
	e7e3c0e682ae2       46169d968e920       18 minutes ago      Running             kube-scheduler              1                   328268537f830       kube-scheduler-embed-certs-644246
	c8e5525bd40e7       56cc512116c8f       19 minutes ago      Exited              busybox                     0                   bcabf760c3cba       busybox
	69d4c35cd6401       52546a367cc9e       19 minutes ago      Exited              coredns                     0                   cc957c77e1525       coredns-66bc5c9577-7ks4q
	1fc235f0560df       409467f978b4a       19 minutes ago      Exited              kindnet-cni                 0                   3fb409eba6f0d       kindnet-bmw79
	e6b423e6a31e5       df0860106674d       19 minutes ago      Exited              kube-proxy                  0                   66e86b84e404c       kube-proxy-lg9p7
	fae72d184a31a       5f1f5298c888d       19 minutes ago      Exited              etcd                        0                   ae2c618ac3054       etcd-embed-certs-644246
	57e6ad38567b4       a0af72f2ec6d6       19 minutes ago      Exited              kube-controller-manager     0                   ebd534628011d       kube-controller-manager-embed-certs-644246
	ee3dd7dd9a85b       46169d968e920       19 minutes ago      Exited              kube-scheduler              0                   5ef1b712437e4       kube-scheduler-embed-certs-644246
	b3e4f310cb1d2       90550c43ad2bc       19 minutes ago      Exited              kube-apiserver              0                   7289aadb8d395       kube-apiserver-embed-certs-644246
	
	
	==> containerd <==
	Sep 29 13:20:58 embed-certs-644246 containerd[476]: time="2025-09-29T13:20:58.359523647Z" level=info msg="received exit event container_id:\"9d53dd0da97f2a8db0cc0ac85bb853e9477d3f3e798eee4ca8bd8e4aca90b6c2\"  id:\"9d53dd0da97f2a8db0cc0ac85bb853e9477d3f3e798eee4ca8bd8e4aca90b6c2\"  pid:3265  exit_status:1  exited_at:{seconds:1759152058  nanos:359265872}"
	Sep 29 13:20:58 embed-certs-644246 containerd[476]: time="2025-09-29T13:20:58.382253605Z" level=info msg="shim disconnected" id=9d53dd0da97f2a8db0cc0ac85bb853e9477d3f3e798eee4ca8bd8e4aca90b6c2 namespace=k8s.io
	Sep 29 13:20:58 embed-certs-644246 containerd[476]: time="2025-09-29T13:20:58.382302252Z" level=warning msg="cleaning up after shim disconnected" id=9d53dd0da97f2a8db0cc0ac85bb853e9477d3f3e798eee4ca8bd8e4aca90b6c2 namespace=k8s.io
	Sep 29 13:20:58 embed-certs-644246 containerd[476]: time="2025-09-29T13:20:58.382315548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:20:59 embed-certs-644246 containerd[476]: time="2025-09-29T13:20:59.050281421Z" level=info msg="RemoveContainer for \"0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536\""
	Sep 29 13:20:59 embed-certs-644246 containerd[476]: time="2025-09-29T13:20:59.054599607Z" level=info msg="RemoveContainer for \"0c05d2c89bc93dc391bce73b7b40b3a1d2471d989dd103b211499e90fd9a0536\" returns successfully"
	Sep 29 13:25:32 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:32.276777501Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:25:32 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:32.339205025Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Sep 29 13:25:32 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:32.340453264Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 13:25:32 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:32.340517249Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:25:46 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:46.277743198Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:25:46 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:46.279456750Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:25:46 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:46.937050774Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:25:48 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:48.802519119Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:25:48 embed-certs-644246 containerd[476]: time="2025-09-29T13:25:48.802568866Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.281780514Z" level=info msg="CreateContainer within sandbox \"3882b3dba6189bc071bb9a3af95578e663e4f7d6dbc7cd79e40db9fca4dc8f26\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.290111999Z" level=info msg="CreateContainer within sandbox \"3882b3dba6189bc071bb9a3af95578e663e4f7d6dbc7cd79e40db9fca4dc8f26\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2\""
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.290643609Z" level=info msg="StartContainer for \"9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2\""
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.345294885Z" level=info msg="StartContainer for \"9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2\" returns successfully"
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.360931631Z" level=info msg="received exit event container_id:\"9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2\"  id:\"9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2\"  pid:3372  exit_status:1  exited_at:{seconds:1759152361  nanos:360450930}"
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.382844190Z" level=info msg="shim disconnected" id=9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2 namespace=k8s.io
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.382895353Z" level=warning msg="cleaning up after shim disconnected" id=9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2 namespace=k8s.io
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.382910173Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.803070435Z" level=info msg="RemoveContainer for \"9d53dd0da97f2a8db0cc0ac85bb853e9477d3f3e798eee4ca8bd8e4aca90b6c2\""
	Sep 29 13:26:01 embed-certs-644246 containerd[476]: time="2025-09-29T13:26:01.807253221Z" level=info msg="RemoveContainer for \"9d53dd0da97f2a8db0cc0ac85bb853e9477d3f3e798eee4ca8bd8e4aca90b6c2\" returns successfully"
	
	
	==> coredns [23057c561f919b94945c5b05eb31b16450ca88a3241da82394cad4ee1da8c20a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35327 - 38432 "HINFO IN 4399475585571759701.8623236205832913163. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022894898s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [69d4c35cd64012132a0644040388e5fa4c203fc451def79d6cc0efd90d7ccd30] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36543 - 35168 "HINFO IN 957827576055723592.8006585472796840795. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026866297s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               embed-certs-644246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-644246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=embed-certs-644246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_08_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-644246
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:28:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:26:29 +0000   Mon, 29 Sep 2025 13:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:26:29 +0000   Mon, 29 Sep 2025 13:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:26:29 +0000   Mon, 29 Sep 2025 13:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:26:29 +0000   Mon, 29 Sep 2025 13:08:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-644246
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d0d4c109c85432e8db15932f17bb840
	  System UUID:                65c9402f-6de5-41ae-a2fe-c7db7f885c6a
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-7ks4q                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-embed-certs-644246                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-bmw79                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-embed-certs-644246             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-644246    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-lg9p7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-644246             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-mt8dc               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wfz7q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rfr6g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-644246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-644246 status is now: NodeHasSufficientMemory
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-644246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-644246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-644246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-644246 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-644246 event: Registered Node embed-certs-644246 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-644246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-644246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-644246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-644246 event: Registered Node embed-certs-644246 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [2c5e8c1c3ba1329d5deb104ffe4a123648580a6355cce5276dd514d7d91b4f82] <==
	{"level":"warn","ts":"2025-09-29T13:09:47.480809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.489067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.497725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.505926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.514176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.524925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.530547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.538816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.546913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.570927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.578682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.586324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.597361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.605168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.612722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:47.665180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:49.820935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.831879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:09:49.821043Z","caller":"traceutil/trace.go:172","msg":"trace[272780333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:557; }","duration":"178.963086ms","start":"2025-09-29T13:09:49.642069Z","end":"2025-09-29T13:09:49.821032Z","steps":["trace[272780333] 'range keys from in-memory index tree'  (duration: 178.751475ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:19:47.036471Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2025-09-29T13:19:47.054729Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1071,"took":"17.938544ms","hash":3972174025,"current-db-size-bytes":3244032,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1343488,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-29T13:19:47.054769Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3972174025,"revision":1071,"compact-revision":-1}
	{"level":"warn","ts":"2025-09-29T13:21:35.074409Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.261417ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788994869720254 > lease_revoke:<id:4089999597eef459>","response":"size:28"}
	{"level":"info","ts":"2025-09-29T13:24:47.041195Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1330}
	{"level":"info","ts":"2025-09-29T13:24:47.043771Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1330,"took":"2.238406ms","hash":2183421300,"current-db-size-bytes":3244032,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1798144,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T13:24:47.043810Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2183421300,"revision":1330,"compact-revision":1071}
	
	
	==> etcd [fae72d184a31a651fed9071d2b9c7e5800c30e9d2e02171a24443d036ad0e6c3] <==
	{"level":"warn","ts":"2025-09-29T13:08:50.511694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.521081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.529337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.536384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.542787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.549574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.557398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.564507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.571583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.578767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.585307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.592220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.607055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.614655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.622740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.629787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.637851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.644435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.652831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.660173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.666618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.677017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.683895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.690728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:08:50.737606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:28:28 up  6:10,  0 users,  load average: 0.96, 0.64, 1.10
	Linux embed-certs-644246 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bd11e57da3f5af163b1ef12b74b2f556d41b6b0dd934ef71a46920d653881db] <==
	I0929 13:26:19.777138       1 main.go:301] handling current node
	I0929 13:26:29.776272       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:26:29.776309       1 main.go:301] handling current node
	I0929 13:26:39.785053       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:26:39.785092       1 main.go:301] handling current node
	I0929 13:26:49.779154       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:26:49.779183       1 main.go:301] handling current node
	I0929 13:26:59.776547       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:26:59.776586       1 main.go:301] handling current node
	I0929 13:27:09.785402       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:09.785433       1 main.go:301] handling current node
	I0929 13:27:19.777358       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:19.777399       1 main.go:301] handling current node
	I0929 13:27:29.778796       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:29.778837       1 main.go:301] handling current node
	I0929 13:27:39.784605       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:39.784636       1 main.go:301] handling current node
	I0929 13:27:49.778033       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:49.778085       1 main.go:301] handling current node
	I0929 13:27:59.778795       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:27:59.778826       1 main.go:301] handling current node
	I0929 13:28:09.784931       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:09.784981       1 main.go:301] handling current node
	I0929 13:28:19.778094       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:28:19.778140       1 main.go:301] handling current node
	
	
	==> kindnet [1fc235f0560df18bc1b28b2bc719561389fcc2648b4672c2df106c3f1e4ceea8] <==
	I0929 13:09:00.068714       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:09:00.068958       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0929 13:09:00.069180       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:09:00.069198       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:09:00.069221       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:09:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:09:00.290732       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:09:00.290757       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:09:00.290769       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:09:00.291785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:09:00.765003       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:09:00.765141       1 metrics.go:72] Registering metrics
	I0929 13:09:00.765564       1 controller.go:711] "Syncing nftables rules"
	I0929 13:09:10.295074       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:09:10.295153       1 main.go:301] handling current node
	I0929 13:09:20.291036       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 13:09:20.291067       1 main.go:301] handling current node
	
	
	==> kube-apiserver [57d2a9624e1027c381e850139a59371acba3ffe2995876f3c7ee92312c5ba2ec] <==
	I0929 13:24:49.276611       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:24:49.562019       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:25:05.352344       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:25:49.276381       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:25:49.276448       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:25:49.276467       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:25:49.277415       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:25:49.277464       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:25:49.277473       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:26:14.997779       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:26:25.743274       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:27:29.657592       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:27:48.245087       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:27:49.277089       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:27:49.277147       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:27:49.277164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:27:49.278193       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:27:49.278295       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:27:49.278311       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b3e4f310cb1d2ceba76acb2c895b2c16bae6c0352d218f9dcdca0ec6dddeb40a] <==
	I0929 13:08:53.838497       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:08:53.845771       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:08:58.884559       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:08:58.889636       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:08:58.934886       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0929 13:08:59.083365       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0929 13:09:25.791022       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:34838: use of closed network connection
	I0929 13:09:26.452688       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:09:26.456493       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:09:26.456563       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:09:26.456638       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:09:26.540051       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.103.15.144"}
	W0929 13:09:26.545498       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:09:26.545572       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0929 13:09:26.550721       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:09:26.550776       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [2cf5ae260ab086a7858422032a71c2b1a2118a4cb6b2821658fcfe01935eb793] <==
	I0929 13:22:21.782214       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:22:51.710423       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:22:51.788804       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:23:21.714502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:23:21.797175       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:23:51.719373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:23:51.804175       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:24:21.723197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:24:21.810714       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:24:51.727144       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:24:51.817426       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:25:21.730864       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:25:21.825129       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:25:51.735526       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:25:51.832267       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:26:21.740273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:26:21.839478       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:26:51.744733       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:26:51.847500       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:21.749044       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:21.855752       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:51.752473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:51.865498       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:21.756527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:21.872458       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [57e6ad38567b4e1b04e678e744e5821d6d7f8bec8cb60f6881032eb0f2c10fc7] <==
	I0929 13:08:58.081763       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 13:08:58.085232       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 13:08:58.088776       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 13:08:58.127694       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 13:08:58.129161       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:08:58.129197       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 13:08:58.129230       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 13:08:58.129233       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:08:58.129283       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:08:58.129291       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:08:58.129506       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 13:08:58.129556       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 13:08:58.129586       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 13:08:58.130168       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 13:08:58.130296       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 13:08:58.130696       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 13:08:58.130922       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 13:08:58.131281       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 13:08:58.131352       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 13:08:58.132112       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 13:08:58.136059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:08:58.137115       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 13:08:58.140291       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:08:58.143578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 13:08:58.160163       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ad925575e8d3c94f93c2cd600707ce56562d811d8418d2a7d9d44de283de1431] <==
	I0929 13:09:49.066752       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:09:49.140973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:09:49.241255       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:09:49.241316       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 13:09:49.241409       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:09:49.268207       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:09:49.268280       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:09:49.274996       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:09:49.275468       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:09:49.275502       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:49.277449       1 config.go:200] "Starting service config controller"
	I0929 13:09:49.277466       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:09:49.277512       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:09:49.277527       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:09:49.277540       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:09:49.277546       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:09:49.277792       1 config.go:309] "Starting node config controller"
	I0929 13:09:49.277821       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:09:49.377717       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:09:49.377734       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:09:49.377755       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:09:49.378568       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [e6b423e6a31e527deb5d8b921af6f92ea464c39c704ed95442081d8698e6a6cd] <==
	I0929 13:08:59.730309       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:08:59.792652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:08:59.893637       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:08:59.893676       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 13:08:59.893810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:08:59.918932       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:08:59.919017       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:08:59.925750       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:08:59.926444       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:08:59.926474       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:08:59.927955       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:08:59.928002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:08:59.928031       1 config.go:200] "Starting service config controller"
	I0929 13:08:59.928036       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:08:59.928057       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:08:59.928058       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:08:59.928387       1 config.go:309] "Starting node config controller"
	I0929 13:08:59.928400       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:08:59.928407       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:09:00.028519       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:09:00.028545       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:09:00.028519       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e7e3c0e682ae22da885e0dbd49d91495430c1f695431e13b4e68584eafd38663] <==
	I0929 13:09:47.253664       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:09:48.248354       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:09:48.248509       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 13:09:48.248572       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:09:48.248597       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:09:48.288071       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:09:48.289154       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:48.295350       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:09:48.295590       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:09:48.297174       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:09:48.297265       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:09:48.395924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ee3dd7dd9a85bce92920f319eb88479792baeceb3c535dfa8680b81695bd5ba9] <==
	E0929 13:08:51.145957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:08:51.145903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:08:51.145998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 13:08:51.146063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:08:51.146048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:08:51.146094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:08:51.146190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:08:51.146475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:08:51.146484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:08:51.146517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:08:51.146549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:08:51.146581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:08:51.146613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 13:08:51.146610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:08:51.146626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 13:08:52.035305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:08:52.053704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:08:52.099932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:08:52.126979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:08:52.194786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 13:08:52.198855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:08:52.276367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:08:52.286380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:08:52.300453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0929 13:08:54.343609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:27:04 embed-certs-644246 kubelet[609]: E0929 13:27:04.277352     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:27:12 embed-certs-644246 kubelet[609]: I0929 13:27:12.276078     609 scope.go:117] "RemoveContainer" containerID="9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2"
	Sep 29 13:27:12 embed-certs-644246 kubelet[609]: E0929 13:27:12.276421     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:27:14 embed-certs-644246 kubelet[609]: E0929 13:27:14.276576     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:27:19 embed-certs-644246 kubelet[609]: E0929 13:27:19.277332     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:27:25 embed-certs-644246 kubelet[609]: I0929 13:27:25.276941     609 scope.go:117] "RemoveContainer" containerID="9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2"
	Sep 29 13:27:25 embed-certs-644246 kubelet[609]: E0929 13:27:25.277191     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:27:25 embed-certs-644246 kubelet[609]: E0929 13:27:25.277662     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:27:34 embed-certs-644246 kubelet[609]: E0929 13:27:34.277054     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:27:36 embed-certs-644246 kubelet[609]: E0929 13:27:36.277441     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:27:37 embed-certs-644246 kubelet[609]: I0929 13:27:37.275642     609 scope.go:117] "RemoveContainer" containerID="9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2"
	Sep 29 13:27:37 embed-certs-644246 kubelet[609]: E0929 13:27:37.275833     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:27:46 embed-certs-644246 kubelet[609]: E0929 13:27:46.277279     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:27:49 embed-certs-644246 kubelet[609]: I0929 13:27:49.275877     609 scope.go:117] "RemoveContainer" containerID="9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2"
	Sep 29 13:27:49 embed-certs-644246 kubelet[609]: E0929 13:27:49.276094     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:27:50 embed-certs-644246 kubelet[609]: E0929 13:27:50.279533     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:28:01 embed-certs-644246 kubelet[609]: E0929 13:28:01.277263     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:28:04 embed-certs-644246 kubelet[609]: I0929 13:28:04.276127     609 scope.go:117] "RemoveContainer" containerID="9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2"
	Sep 29 13:28:04 embed-certs-644246 kubelet[609]: E0929 13:28:04.276289     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:28:04 embed-certs-644246 kubelet[609]: E0929 13:28:04.277130     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:28:13 embed-certs-644246 kubelet[609]: E0929 13:28:13.276827     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	Sep 29 13:28:16 embed-certs-644246 kubelet[609]: E0929 13:28:16.276854     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rfr6g" podUID="93751269-985a-4d2f-9768-407c72ae300b"
	Sep 29 13:28:17 embed-certs-644246 kubelet[609]: I0929 13:28:17.280030     609 scope.go:117] "RemoveContainer" containerID="9ee978d11402f0a84191126de64ca8bc1588bbcd8e025f2123de58a16fcfadf2"
	Sep 29 13:28:17 embed-certs-644246 kubelet[609]: E0929 13:28:17.280238     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wfz7q_kubernetes-dashboard(eb275674-f63c-414d-965b-7b1134eeec43)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wfz7q" podUID="eb275674-f63c-414d-965b-7b1134eeec43"
	Sep 29 13:28:28 embed-certs-644246 kubelet[609]: E0929 13:28:28.277338     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mt8dc" podUID="5510c590-a7e8-4010-9032-6f9073db87f9"
	
	
	==> storage-provisioner [41d326ca331192a3ff6005cb92d6ee67bbc962d23107ba44a96e3d44fed63d52] <==
	I0929 13:09:49.052178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:10:19.055836       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [89d5dd60e7ea9b3d05940cb04376573f4d2e879ecf1a726ab40a5fdc0f6beb26] <==
	W0929 13:28:04.619910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:06.623336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:06.626891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:08.630215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:08.633874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:10.637478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:10.641589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:12.645150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:12.650659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:14.653786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:14.657878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:16.661473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:16.664949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:18.669116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:18.673929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:20.677546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:20.682642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:22.685879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:22.692983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:24.696657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:24.700457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:26.704236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:26.709463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:28.712191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:28:28.716121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-644246 -n embed-certs-644246
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-644246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-644246 describe pod metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-644246 describe pod metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g: exit status 1 (63.515547ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-mt8dc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rfr6g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-644246 describe pod metrics-server-746fcd58dc-mt8dc kubernetes-dashboard-855c9754f9-rfr6g: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.92s)
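The NotFound errors above are a timing race in the post-mortem step: the pods reported as non-running by the field-selector query had already been deleted by the time kubectl describe ran, so the describe step exits 1. A minimal sketch of a more tolerant variant of that check (hypothetical commands, not part of the test harness), using kubectl get --ignore-not-found so pods that disappear between the two calls are skipped instead of producing an error:

	kubectl --context embed-certs-644246 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  # --ignore-not-found prints nothing (exit 0) for pods that no longer exist
	  kubectl --context embed-certs-644246 -n "$ns" get pod "$name" --ignore-not-found -o wide
	done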

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-95jmk" [010dbb38-5dfe-41e9-a655-0c6d4115135a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:20:28.892223 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:20:39.707528 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:20:58.708570 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554589 -n no-preload-554589
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:29:26.973471075 +0000 UTC m=+4331.557494266
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-554589 describe po kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-554589 describe po kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-95jmk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-554589/192.168.94.2
Start Time:       Mon, 29 Sep 2025 13:10:49 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-st8jd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-st8jd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk to no-preload-554589
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m25s (x63 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m25s (x63 over 18m)  kubelet            Error: ImagePullBackOff
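The pull failures above come from Docker Hub's anonymous pull rate limit (the 429 toomanyrequests response in the kubelet events). A minimal sketch for checking the remaining anonymous quota from the affected host, assuming curl and jq are available; it uses Docker's documented ratelimitpreview/test probe image and reads the ratelimit-* response headers, and per Docker's documentation a HEAD request does not itself count against the limit:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i '^ratelimit'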
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-554589 logs kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-554589 logs kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard: exit status 1 (70.432843ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-95jmk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-554589 logs kubernetes-dashboard-855c9754f9-95jmk -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
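The 9m0s wait that failed here can be reproduced outside the harness with kubectl wait against the same label selector; a minimal sketch, assuming the no-preload-554589 context is still present in the kubeconfig:

	kubectl --context no-preload-554589 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

With the image pull backing off as shown above, this times out the same way the test's poll does.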
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-554589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-554589
helpers_test.go:243: (dbg) docker inspect no-preload-554589:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94",
	        "Created": "2025-09-29T13:09:10.342280688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1431160,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:10:36.121342788Z",
	            "FinishedAt": "2025-09-29T13:10:35.305865245Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/hosts",
	        "LogPath": "/var/lib/docker/containers/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94/d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94-json.log",
	        "Name": "/no-preload-554589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-554589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-554589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d8190e936dbd53573e6799b0cec0b471da8c9ec199e5d43e68f52abd020fff94",
	                "LowerDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa80ce01923e8ae555d2846135b0f843a88454a81977d5d3d5ffc6a21942166c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-554589",
	                "Source": "/var/lib/docker/volumes/no-preload-554589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-554589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-554589",
	                "name.minikube.sigs.k8s.io": "no-preload-554589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "607f41e6a298d3adb3d27debd3320db6d26c567d901e2f3451e88d6743e1fd6b",
	            "SandboxKey": "/var/run/docker/netns/607f41e6a298",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33611"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33612"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33615"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33613"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33614"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-554589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:00:17:05:95:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "263b2a799048a278e68c13f92637797019eb9f5749eb8ada73792b87bdd5d9d4",
	                    "EndpointID": "fc94ef3df882bab10e575a32b4c97a8593affc9019145e0498fb54cd7405a90b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-554589",
	                        "d8190e936dbd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
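The full docker inspect dump above can be narrowed to the fields the harness actually cares about with a Go template, the same mechanism the log below uses to look up the SSH host port. A minimal sketch that prints the forwarded API server port and the container's network IP (for this dump that would be 33614 and 192.168.94.2):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }} {{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' no-preload-554589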
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554589 -n no-preload-554589
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-554589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-554589 logs -n 25: (1.487470381s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-625526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ stop    │ -p default-k8s-diff-port-625526 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-625526 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-625526 │ jenkins │ v1.37.0 │ 29 Sep 25 13:22 UTC │ 29 Sep 25 13:23 UTC │
	│ image   │ old-k8s-version-495121 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121       │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ start   │ -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ image   │ embed-certs-644246 image list --format=json                                                                                                                                                                                                         │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p embed-certs-644246 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p embed-certs-644246 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p embed-certs-644246                                                                                                                                                                                                                               │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p embed-certs-644246                                                                                                                                                                                                                               │ embed-certs-644246           │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-740698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ stop    │ -p newest-cni-740698 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-740698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ start   │ -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ image   │ newest-cni-740698 image list --format=json                                                                                                                                                                                                          │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p newest-cni-740698 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p newest-cni-740698 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p newest-cni-740698                                                                                                                                                                                                                                │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p newest-cni-740698                                                                                                                                                                                                                                │ newest-cni-740698            │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:28:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:28:47.629010 1465110 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:28:47.629105 1465110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:47.629112 1465110 out.go:374] Setting ErrFile to fd 2...
	I0929 13:28:47.629116 1465110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:47.629362 1465110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:28:47.629827 1465110 out.go:368] Setting JSON to false
	I0929 13:28:47.631050 1465110 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":22265,"bootTime":1759130263,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:28:47.631151 1465110 start.go:140] virtualization: kvm guest
	I0929 13:28:47.632789 1465110 out.go:179] * [newest-cni-740698] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:28:47.633869 1465110 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:28:47.633868 1465110 notify.go:220] Checking for updates...
	I0929 13:28:47.635694 1465110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:28:47.636760 1465110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:47.637754 1465110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:28:47.638765 1465110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:28:47.639953 1465110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:28:47.641486 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:47.642019 1465110 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:28:47.665832 1465110 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:28:47.665953 1465110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:47.723794 1465110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 13:28:47.714011734 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:47.723910 1465110 docker.go:318] overlay module found
	I0929 13:28:47.725515 1465110 out.go:179] * Using the docker driver based on existing profile
	I0929 13:28:47.726501 1465110 start.go:304] selected driver: docker
	I0929 13:28:47.726514 1465110 start.go:924] validating driver "docker" against &{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:47.726592 1465110 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:28:47.727137 1465110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:47.786604 1465110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 13:28:47.775360933 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:47.786935 1465110 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:28:47.786994 1465110 cni.go:84] Creating CNI manager for ""
	I0929 13:28:47.787058 1465110 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:28:47.787138 1465110 start.go:348] cluster config:
	{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:47.788845 1465110 out.go:179] * Starting "newest-cni-740698" primary control-plane node in "newest-cni-740698" cluster
	I0929 13:28:47.789698 1465110 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:28:47.790619 1465110 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:28:47.791466 1465110 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:47.791515 1465110 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 13:28:47.791538 1465110 cache.go:58] Caching tarball of preloaded images
	I0929 13:28:47.791581 1465110 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:28:47.791656 1465110 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 13:28:47.791668 1465110 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 13:28:47.791790 1465110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json ...
	I0929 13:28:47.814442 1465110 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:28:47.814461 1465110 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:28:47.814477 1465110 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:28:47.814502 1465110 start.go:360] acquireMachinesLock for newest-cni-740698: {Name:mkf40a81be102ef43d2455f2435b32c6c1c894a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:28:47.814571 1465110 start.go:364] duration metric: took 41.549µs to acquireMachinesLock for "newest-cni-740698"
	I0929 13:28:47.814589 1465110 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:28:47.814597 1465110 fix.go:54] fixHost starting: 
	I0929 13:28:47.814799 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:47.833656 1465110 fix.go:112] recreateIfNeeded on newest-cni-740698: state=Stopped err=<nil>
	W0929 13:28:47.833696 1465110 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:28:47.834947 1465110 out.go:252] * Restarting existing docker container for "newest-cni-740698" ...
	I0929 13:28:47.835039 1465110 cli_runner.go:164] Run: docker start newest-cni-740698
	I0929 13:28:48.076859 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:48.095440 1465110 kic.go:430] container "newest-cni-740698" state is running.
	I0929 13:28:48.095808 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:48.114058 1465110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json ...
	I0929 13:28:48.114307 1465110 machine.go:93] provisionDockerMachine start ...
	I0929 13:28:48.114405 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:48.133581 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:48.133843 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:48.133858 1465110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:28:48.134500 1465110 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46200->127.0.0.1:33631: read: connection reset by peer
	I0929 13:28:51.270952 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-740698
	
	I0929 13:28:51.270992 1465110 ubuntu.go:182] provisioning hostname "newest-cni-740698"
	I0929 13:28:51.271069 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.289296 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:51.289545 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:51.289560 1465110 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-740698 && echo "newest-cni-740698" | sudo tee /etc/hostname
	I0929 13:28:51.438761 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-740698
	
	I0929 13:28:51.438840 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.457877 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:51.458135 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:51.458154 1465110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-740698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-740698/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-740698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:28:51.593410 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:28:51.593449 1465110 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:28:51.593481 1465110 ubuntu.go:190] setting up certificates
	I0929 13:28:51.593495 1465110 provision.go:84] configureAuth start
	I0929 13:28:51.593550 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:51.611525 1465110 provision.go:143] copyHostCerts
	I0929 13:28:51.611591 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:28:51.611615 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:28:51.611700 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:28:51.611825 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:28:51.611837 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:28:51.611881 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:28:51.611991 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:28:51.612001 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:28:51.612053 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:28:51.612145 1465110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.newest-cni-740698 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-740698]
	I0929 13:28:51.883873 1465110 provision.go:177] copyRemoteCerts
	I0929 13:28:51.883933 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:28:51.883991 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.903374 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.001398 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:28:52.027859 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:28:52.052634 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 13:28:52.076437 1465110 provision.go:87] duration metric: took 482.92934ms to configureAuth
	I0929 13:28:52.076472 1465110 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:28:52.076652 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:52.076664 1465110 machine.go:96] duration metric: took 3.962343403s to provisionDockerMachine
	I0929 13:28:52.076673 1465110 start.go:293] postStartSetup for "newest-cni-740698" (driver="docker")
	I0929 13:28:52.076684 1465110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:28:52.076733 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:28:52.076772 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.094150 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.191088 1465110 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:28:52.194641 1465110 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:28:52.194668 1465110 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:28:52.194676 1465110 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:28:52.194684 1465110 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:28:52.194695 1465110 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:28:52.194737 1465110 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:28:52.194818 1465110 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:28:52.194917 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:28:52.204323 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:28:52.230005 1465110 start.go:296] duration metric: took 153.302822ms for postStartSetup
	I0929 13:28:52.230084 1465110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:28:52.230135 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.248054 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.342555 1465110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:28:52.347137 1465110 fix.go:56] duration metric: took 4.532532077s for fixHost
	I0929 13:28:52.347165 1465110 start.go:83] releasing machines lock for "newest-cni-740698", held for 4.532582488s
	I0929 13:28:52.347237 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:52.364912 1465110 ssh_runner.go:195] Run: cat /version.json
	I0929 13:28:52.364957 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.365051 1465110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:28:52.365121 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.382974 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.383162 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.554416 1465110 ssh_runner.go:195] Run: systemctl --version
	I0929 13:28:52.559399 1465110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:28:52.563991 1465110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:28:52.583272 1465110 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:28:52.583349 1465110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:28:52.592814 1465110 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
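
The loopback patch above only touches files matching /etc/cni/net.d/*loopback.conf*: it adds a "name" field when one is missing and pins cniVersion to 1.0.0. A minimal sketch of what the patched file ends up containing, reconstructed from the sed expressions in the log (the exact file name and any extra fields present on the node are assumptions):

	cat /etc/cni/net.d/*loopback.conf*
	# {
	#     "cniVersion": "1.0.0",
	#     "name": "loopback",
	#     "type": "loopback"
	# }
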
	I0929 13:28:52.592837 1465110 start.go:495] detecting cgroup driver to use...
	I0929 13:28:52.592867 1465110 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:28:52.592905 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:28:52.606487 1465110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:28:52.618517 1465110 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:28:52.618560 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:28:52.631757 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:28:52.644305 1465110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:28:52.709227 1465110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:28:52.775137 1465110 docker.go:234] disabling docker service ...
	I0929 13:28:52.775221 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:28:52.788059 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:28:52.799783 1465110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:28:52.864439 1465110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:28:52.929637 1465110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:28:52.941537 1465110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:28:52.958075 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:28:52.968107 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:28:52.978062 1465110 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:28:52.978121 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:28:52.988006 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:28:52.997660 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:28:53.007646 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:28:53.017544 1465110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:28:53.026981 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:28:53.037048 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:28:53.047149 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:28:53.057156 1465110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:28:53.065634 1465110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:28:53.074061 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:53.138360 1465110 ssh_runner.go:195] Run: sudo systemctl restart containerd
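
Taken together, the crictl.yaml write and the sed edits above configure the CRI side of containerd before this restart. The fragment below is a sketch of the resulting settings; the TOML section names are assumed to follow the containerd 1.7 default config layout rather than copied from the node:

	# /etc/crictl.yaml (written verbatim by the tee above)
	#   runtime-endpoint: unix:///run/containerd/containerd.sock
	#
	# /etc/containerd/config.toml (key values set by the sed edits above, among others)
	#   [plugins."io.containerd.grpc.v1.cri"]
	#     enable_unprivileged_ports = true
	#     sandbox_image = "registry.k8s.io/pause:3.10.1"
	#     [plugins."io.containerd.grpc.v1.cri".cni]
	#       conf_dir = "/etc/cni/net.d"
	#     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#       SystemdCgroup = true
	#
	# Quick check of the values on the node:
	grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
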
	I0929 13:28:53.242303 1465110 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:28:53.242387 1465110 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:28:53.246564 1465110 start.go:563] Will wait 60s for crictl version
	I0929 13:28:53.246638 1465110 ssh_runner.go:195] Run: which crictl
	I0929 13:28:53.250428 1465110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:28:53.285386 1465110 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 13:28:53.285461 1465110 ssh_runner.go:195] Run: containerd --version
	I0929 13:28:53.311246 1465110 ssh_runner.go:195] Run: containerd --version
	I0929 13:28:53.337354 1465110 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:28:53.338401 1465110 cli_runner.go:164] Run: docker network inspect newest-cni-740698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:28:53.355730 1465110 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 13:28:53.360006 1465110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:28:53.373781 1465110 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0929 13:28:53.374806 1465110 kubeadm.go:875] updating cluster {Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:28:53.374940 1465110 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:53.375027 1465110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:28:53.409703 1465110 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:28:53.409723 1465110 containerd.go:534] Images already preloaded, skipping extraction
	I0929 13:28:53.409781 1465110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:28:53.446238 1465110 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:28:53.446258 1465110 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:28:53.446266 1465110 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0929 13:28:53.446366 1465110 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-740698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:28:53.446423 1465110 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:28:53.482332 1465110 cni.go:84] Creating CNI manager for ""
	I0929 13:28:53.482352 1465110 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:28:53.482361 1465110 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0929 13:28:53.482383 1465110 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-740698 NodeName:newest-cni-740698 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:28:53.482515 1465110 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-740698"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:28:53.482573 1465110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:28:53.492790 1465110 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:28:53.492848 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:28:53.502450 1465110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0929 13:28:53.520767 1465110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:28:53.541161 1465110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0929 13:28:53.559905 1465110 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:28:53.563697 1465110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:28:53.575325 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:53.644353 1465110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:28:53.667523 1465110 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698 for IP: 192.168.85.2
	I0929 13:28:53.667547 1465110 certs.go:194] generating shared ca certs ...
	I0929 13:28:53.667566 1465110 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:53.667743 1465110 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:28:53.667829 1465110 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:28:53.667849 1465110 certs.go:256] generating profile certs ...
	I0929 13:28:53.667989 1465110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/client.key
	I0929 13:28:53.668064 1465110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.key.abc54583
	I0929 13:28:53.668121 1465110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.key
	I0929 13:28:53.668255 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:28:53.668287 1465110 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:28:53.668299 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:28:53.668331 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:28:53.668365 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:28:53.668397 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:28:53.668454 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:28:53.669280 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:28:53.696915 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:28:53.725095 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:28:53.759173 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:28:53.787114 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:28:53.812387 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 13:28:53.837455 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:28:53.864150 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 13:28:53.892727 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:28:53.918653 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:28:53.944563 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:28:53.970121 1465110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:28:53.988778 1465110 ssh_runner.go:195] Run: openssl version
	I0929 13:28:53.994749 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:28:54.004933 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.008622 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.008720 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.015872 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:28:54.025519 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:28:54.035467 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.039540 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.039596 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.047058 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:28:54.056922 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:28:54.066836 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.070330 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.070369 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.077657 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
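
Each of the three certificate blocks above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it into /etc/ssl/certs under that hash so the system trust store can find it. A minimal sketch of one iteration, using the minikubeCA values shown in the log:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints b5213941 for this cert, per the log
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # matches the ln -fs ... b5213941.0 above
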
	I0929 13:28:54.087032 1465110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:28:54.090728 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:28:54.097689 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:28:54.104122 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:28:54.110565 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:28:54.117388 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:28:54.123946 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
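
The six openssl invocations above are expiry checks: -checkend 86400 exits non-zero if the certificate expires within 86400 seconds (24 hours), so a zero exit means each control-plane certificate still has at least a day of validity. A minimal stand-alone example of the same check:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h"
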
	I0929 13:28:54.130630 1465110 kubeadm.go:392] StartCluster: {Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:54.130735 1465110 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:28:54.130798 1465110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:28:54.167363 1465110 cri.go:89] found id: "36fd60ad8b5f43506f08923872ee0aac518a04a1fbe0bd7231ed286722550d61"
	I0929 13:28:54.167386 1465110 cri.go:89] found id: "85a7d209414fd13547d72f843320321f35203a7a91f1d1d949bcbef472b56b42"
	I0929 13:28:54.167389 1465110 cri.go:89] found id: "7ad3eea1da3e1387bfbcdf334ec1f656e07e5923ea432d735dd02eb19cac0365"
	I0929 13:28:54.167392 1465110 cri.go:89] found id: "f52b4638fb9b2c69f79356c1efc63df5afbd181951d1758d972cc553ffbc5dba"
	I0929 13:28:54.167395 1465110 cri.go:89] found id: "9ea775d7aaeeeac021fb1008a123392683e633a680971b8a0c1d0ce312bb1530"
	I0929 13:28:54.167397 1465110 cri.go:89] found id: "e7cf3d47f09c990445369fe3081a61f3b660acc90dc8849ee297eefb91ad2462"
	I0929 13:28:54.167408 1465110 cri.go:89] found id: "00afc0aa272581aa861d734fd4eff7e4d7b47a8a679dd8459df8264c4766bf57"
	I0929 13:28:54.167411 1465110 cri.go:89] found id: ""
	I0929 13:28:54.167452 1465110 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0929 13:28:54.182182 1465110 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-29T13:28:54Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0929 13:28:54.182268 1465110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:28:54.194234 1465110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:28:54.194257 1465110 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:28:54.194302 1465110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:28:54.207740 1465110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:28:54.208661 1465110 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-740698" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:54.209320 1465110 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1097891/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-740698" cluster setting kubeconfig missing "newest-cni-740698" context setting]
	I0929 13:28:54.210396 1465110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.213105 1465110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:28:54.227105 1465110 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 13:28:54.227188 1465110 kubeadm.go:593] duration metric: took 32.922621ms to restartPrimaryControlPlane
	I0929 13:28:54.227204 1465110 kubeadm.go:394] duration metric: took 96.582969ms to StartCluster
	I0929 13:28:54.227267 1465110 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.227417 1465110 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:54.229069 1465110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.229359 1465110 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:28:54.229589 1465110 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:28:54.229695 1465110 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-740698"
	I0929 13:28:54.229719 1465110 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-740698"
	I0929 13:28:54.229785 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:54.229795 1465110 addons.go:69] Setting default-storageclass=true in profile "newest-cni-740698"
	I0929 13:28:54.229812 1465110 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-740698"
	I0929 13:28:54.229847 1465110 addons.go:69] Setting metrics-server=true in profile "newest-cni-740698"
	I0929 13:28:54.229863 1465110 addons.go:238] Setting addon metrics-server=true in "newest-cni-740698"
	W0929 13:28:54.229871 1465110 addons.go:247] addon metrics-server should already be in state true
	I0929 13:28:54.229908 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	W0929 13:28:54.229922 1465110 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:28:54.229954 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.230018 1465110 addons.go:69] Setting dashboard=true in profile "newest-cni-740698"
	I0929 13:28:54.230040 1465110 addons.go:238] Setting addon dashboard=true in "newest-cni-740698"
	W0929 13:28:54.230059 1465110 addons.go:247] addon dashboard should already be in state true
	I0929 13:28:54.230085 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.230479 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230614 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230497 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230854 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.232563 1465110 out.go:179] * Verifying Kubernetes components...
	I0929 13:28:54.236717 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:54.260047 1465110 addons.go:238] Setting addon default-storageclass=true in "newest-cni-740698"
	W0929 13:28:54.260075 1465110 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:28:54.260109 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.260230 1465110 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:28:54.260637 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.261415 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:28:54.261436 1465110 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:28:54.261589 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.263389 1465110 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:28:54.264722 1465110 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:28:54.264809 1465110 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.264925 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:28:54.265009 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.267056 1465110 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:28:54.267927 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:28:54.267947 1465110 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:28:54.268035 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.291623 1465110 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:28:54.291658 1465110 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:28:54.291740 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.296013 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.312876 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.323245 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.326917 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.391936 1465110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:28:54.415344 1465110 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:28:54.415421 1465110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:28:54.433355 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:28:54.433382 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:28:54.440764 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.458141 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:28:54.458173 1465110 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:28:54.465310 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:28:54.467810 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:28:54.467831 1465110 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:28:54.497250 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:28:54.497286 1465110 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:28:54.497309 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:28:54.497326 1465110 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:28:54.529254 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:28:54.529283 1465110 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:28:54.532022 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 13:28:54.538659 1465110 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:28:54.538708 1465110 retry.go:31] will retry after 306.798742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:28:54.563503 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:28:54.563535 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:28:54.589041 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:28:54.589068 1465110 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:28:54.617100 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:28:54.617132 1465110 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:28:54.647917 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:28:54.647952 1465110 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:28:54.678004 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:28:54.678030 1465110 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:28:54.698888 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:28:54.698915 1465110 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:28:54.717149 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:28:54.845675 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.916311 1465110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:28:56.202046 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.736694687s)
	I0929 13:28:56.651351 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.119282964s)
	I0929 13:28:56.651401 1465110 addons.go:479] Verifying addon metrics-server=true in "newest-cni-740698"
	I0929 13:28:56.651456 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.934266797s)
	I0929 13:28:56.652822 1465110 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-740698 addons enable metrics-server
	
	I0929 13:28:56.694191 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.848477294s)
	I0929 13:28:56.694272 1465110 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.777927453s)
	I0929 13:28:56.694309 1465110 api_server.go:72] duration metric: took 2.464920673s to wait for apiserver process to appear ...
	I0929 13:28:56.694400 1465110 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:28:56.694433 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:56.695636 1465110 out.go:179] * Enabled addons: default-storageclass, metrics-server, dashboard, storage-provisioner
	I0929 13:28:56.696536 1465110 addons.go:514] duration metric: took 2.466970901s for enable addons: enabled=[default-storageclass metrics-server dashboard storage-provisioner]
	I0929 13:28:56.698554 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:28:56.698577 1465110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
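
The 500 responses above come from the apiserver's verbose /healthz endpoint: every individual check is listed, and the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that have not finished yet after the restart, which is why minikube keeps polling. The same probe can be reproduced by hand; a minimal sketch, assuming anonymous access to /healthz is allowed (the default RBAC bootstrap grants it) and using -k to skip verification of the cluster's self-signed certificate:

	curl -sk "https://192.168.85.2:8443/healthz?verbose"
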
	I0929 13:28:57.195196 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:57.201196 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:28:57.201230 1465110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:28:57.694973 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:57.699509 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:28:57.700692 1465110 api_server.go:141] control plane version: v1.34.0
	I0929 13:28:57.700722 1465110 api_server.go:131] duration metric: took 1.006311167s to wait for apiserver health ...
	I0929 13:28:57.700734 1465110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:28:57.704357 1465110 system_pods.go:59] 9 kube-system pods found
	I0929 13:28:57.704397 1465110 system_pods.go:61] "coredns-66bc5c9577-g22nn" [9ef181d0-e9e8-4118-be6a-82c8fc1b9262] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:28:57.704407 1465110 system_pods.go:61] "etcd-newest-cni-740698" [5b7ff3a3-7c62-4c27-a650-b0b16bb740cb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:28:57.704420 1465110 system_pods.go:61] "kindnet-r7p4j" [35989a73-e8b9-4fc4-a0b9-e95c31cc7a61] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 13:28:57.704426 1465110 system_pods.go:61] "kube-apiserver-newest-cni-740698" [046eaa92-bda2-4f34-b0b9-38f5ca2aee74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:28:57.704460 1465110 system_pods.go:61] "kube-controller-manager-newest-cni-740698" [cc43f598-9fe3-421f-82bb-bc03f0b6022a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:28:57.704470 1465110 system_pods.go:61] "kube-proxy-2csmd" [3abee784-525d-4f16-91f3-83a6f4b2a704] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 13:28:57.704476 1465110 system_pods.go:61] "kube-scheduler-newest-cni-740698" [6f9039a5-c310-4969-9b07-e6e854a43e38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:28:57.704483 1465110 system_pods.go:61] "metrics-server-746fcd58dc-8n4ts" [5808bc7a-eba9-4a8a-b4f8-8a6218c7dc57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:28:57.704487 1465110 system_pods.go:61] "storage-provisioner" [8b39193c-2d16-4945-bc4e-3b8931f63fff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:28:57.704507 1465110 system_pods.go:74] duration metric: took 3.765055ms to wait for pod list to return data ...
	I0929 13:28:57.704517 1465110 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:28:57.706811 1465110 default_sa.go:45] found service account: "default"
	I0929 13:28:57.706831 1465110 default_sa.go:55] duration metric: took 2.307161ms for default service account to be created ...
	I0929 13:28:57.706843 1465110 kubeadm.go:578] duration metric: took 3.477454114s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:28:57.706858 1465110 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:28:57.709210 1465110 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:28:57.709235 1465110 node_conditions.go:123] node cpu capacity is 8
	I0929 13:28:57.709248 1465110 node_conditions.go:105] duration metric: took 2.385873ms to run NodePressure ...
	I0929 13:28:57.709259 1465110 start.go:241] waiting for startup goroutines ...
	I0929 13:28:57.709266 1465110 start.go:246] waiting for cluster config update ...
	I0929 13:28:57.709279 1465110 start.go:255] writing updated cluster config ...
	I0929 13:28:57.709560 1465110 ssh_runner.go:195] Run: rm -f paused
	I0929 13:28:57.759423 1465110 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:28:57.761826 1465110 out.go:179] * Done! kubectl is now configured to use "newest-cni-740698" cluster and "default" namespace by default
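The verbose healthz dump at the top of this section, followed by the terse 200 "ok", is minikube's apiserver wait loop. A minimal sketch of re-running the same check by hand, assuming the kubeconfig context created for this profile still exists:

    # same endpoint the wait loop polls; 'get --raw' sends the request straight to the configured apiserver
    kubectl --context newest-cni-740698 get --raw '/healthz?verbose'
    kubectl --context newest-cni-740698 get --raw '/healthz'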
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	18757e8aef156       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   8                   7eda4e900a673       dashboard-metrics-scraper-6ffb444bf9-mrrdc
	fd251c5d0e784       6e38f40d628db       17 minutes ago      Running             storage-provisioner         3                   dfa171a4ed934       storage-provisioner
	e580b0127dfd6       409467f978b4a       18 minutes ago      Running             kindnet-cni                 1                   05322e95cb718       kindnet-5z49c
	ca01387f12644       56cc512116c8f       18 minutes ago      Running             busybox                     1                   3bf258c532575       busybox
	1d979b14b26ab       52546a367cc9e       18 minutes ago      Running             coredns                     1                   1af1129a1f8b5       coredns-66bc5c9577-6cxff
	83ac731714bb3       6e38f40d628db       18 minutes ago      Exited              storage-provisioner         2                   dfa171a4ed934       storage-provisioner
	5e80633031c5e       df0860106674d       18 minutes ago      Running             kube-proxy                  1                   ee217a6172795       kube-proxy-8kkxk
	68115be5aa4be       46169d968e920       18 minutes ago      Running             kube-scheduler              1                   94b133b22fb5f       kube-scheduler-no-preload-554589
	c35a9627a37f9       5f1f5298c888d       18 minutes ago      Running             etcd                        1                   8ade610893ad6       etcd-no-preload-554589
	b47e2d7f81b4b       a0af72f2ec6d6       18 minutes ago      Running             kube-controller-manager     1                   905d7f020f204       kube-controller-manager-no-preload-554589
	447139fa2a837       90550c43ad2bc       18 minutes ago      Running             kube-apiserver              1                   5666fe097a514       kube-apiserver-no-preload-554589
	795ed730b0d90       56cc512116c8f       19 minutes ago      Exited              busybox                     0                   22d7bc602f236       busybox
	21b59ec52c2f1       52546a367cc9e       19 minutes ago      Exited              coredns                     0                   a7fad0754c7c0       coredns-66bc5c9577-6cxff
	fe92b189cf883       409467f978b4a       19 minutes ago      Exited              kindnet-cni                 0                   9585f7d801570       kindnet-5z49c
	86d180d3fafec       df0860106674d       19 minutes ago      Exited              kube-proxy                  0                   3341dd5c83a4d       kube-proxy-8kkxk
	f157b54ee5632       46169d968e920       19 minutes ago      Exited              kube-scheduler              0                   e9e02814c0a69       kube-scheduler-no-preload-554589
	8c5c1254cf938       5f1f5298c888d       19 minutes ago      Exited              etcd                        0                   68c46b90c3df3       etcd-no-preload-554589
	448fabba6fe89       a0af72f2ec6d6       19 minutes ago      Exited              kube-controller-manager     0                   315586abc9b62       kube-controller-manager-no-preload-554589
	3e59ee92e127e       90550c43ad2bc       19 minutes ago      Exited              kube-apiserver              0                   e29b1bd7bfd94       kube-apiserver-no-preload-554589
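The table above is the CRI runtime's view of containers on the no-preload-554589 node (note the dashboard-metrics-scraper container on attempt 8 that exited two minutes before collection). A sketch of regenerating it directly, assuming the profile is still up and crictl is available on the node:

    # list running and exited containers via the CRI client inside the minikube node
    minikube -p no-preload-554589 ssh -- sudo crictl ps -a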
	
	
	==> containerd <==
	Sep 29 13:21:49 no-preload-554589 containerd[480]: time="2025-09-29T13:21:49.538083183Z" level=info msg="RemoveContainer for \"25c7849b49de4fcce1432876200168fe14f9dd85fba555879aead2e742a23e49\" returns successfully"
	Sep 29 13:22:00 no-preload-554589 containerd[480]: time="2025-09-29T13:22:00.825423304Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:22:00 no-preload-554589 containerd[480]: time="2025-09-29T13:22:00.826991269Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:22:01 no-preload-554589 containerd[480]: time="2025-09-29T13:22:01.501245667Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:22:03 no-preload-554589 containerd[480]: time="2025-09-29T13:22:03.356356008Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:22:03 no-preload-554589 containerd[480]: time="2025-09-29T13:22:03.356427892Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 29 13:26:43 no-preload-554589 containerd[480]: time="2025-09-29T13:26:43.825886510Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:26:43 no-preload-554589 containerd[480]: time="2025-09-29T13:26:43.871607521Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
	Sep 29 13:26:43 no-preload-554589 containerd[480]: time="2025-09-29T13:26:43.872991163Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 13:26:43 no-preload-554589 containerd[480]: time="2025-09-29T13:26:43.873030257Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.827858097Z" level=info msg="CreateContainer within sandbox \"7eda4e900a6738355385f1d1a2c9499fa50e19615570e3d9463f03a9fdb78adc\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.839876442Z" level=info msg="CreateContainer within sandbox \"7eda4e900a6738355385f1d1a2c9499fa50e19615570e3d9463f03a9fdb78adc\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1\""
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.840449886Z" level=info msg="StartContainer for \"18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1\""
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.895391102Z" level=info msg="StartContainer for \"18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1\" returns successfully"
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.911849545Z" level=info msg="received exit event container_id:\"18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1\"  id:\"18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1\"  pid:3372  exit_status:1  exited_at:{seconds:1759152411  nanos:911566800}"
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.934409213Z" level=info msg="shim disconnected" id=18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1 namespace=k8s.io
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.934444679Z" level=warning msg="cleaning up after shim disconnected" id=18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1 namespace=k8s.io
	Sep 29 13:26:51 no-preload-554589 containerd[480]: time="2025-09-29T13:26:51.934460458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:26:52 no-preload-554589 containerd[480]: time="2025-09-29T13:26:52.282041276Z" level=info msg="RemoveContainer for \"746d64d74fed7002ca37aeb489e202b526746ba3767143a22716d95410bfb7d4\""
	Sep 29 13:26:52 no-preload-554589 containerd[480]: time="2025-09-29T13:26:52.285758834Z" level=info msg="RemoveContainer for \"746d64d74fed7002ca37aeb489e202b526746ba3767143a22716d95410bfb7d4\" returns successfully"
	Sep 29 13:27:13 no-preload-554589 containerd[480]: time="2025-09-29T13:27:13.825698741Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:27:13 no-preload-554589 containerd[480]: time="2025-09-29T13:27:13.827249115Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:27:14 no-preload-554589 containerd[480]: time="2025-09-29T13:27:14.474795142Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:27:16 no-preload-554589 containerd[480]: time="2025-09-29T13:27:16.323472343Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:27:16 no-preload-554589 containerd[480]: time="2025-09-29T13:27:16.323569539Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	
	
	==> coredns [1d979b14b26abc6c476dc5cfb879e053dbb7e9fdc8eadd3eb4baf4296757d319] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41598 - 16224 "HINFO IN 3872912427259326573.3361768498413974895. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.198772148s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
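The i/o timeouts above are this CoreDNS instance failing to reach the kubernetes service VIP (10.96.0.1:443). A diagnostic sketch, assuming kube-proxy is in iptables mode as its logs later in this report indicate:

    # confirm the service VIP is programmed in the nat table on the node
    minikube -p no-preload-554589 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1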
	
	
	==> coredns [21b59ec52c2f189cce4c1c71122fb539bab5404609e8d49bc9bc242623c98f2d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51079 - 40449 "HINFO IN 6588487073909697858.1471815918097734234. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024067209s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               no-preload-554589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-554589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=no-preload-554589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_09_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:09:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-554589
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:29:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:24:40 +0000   Mon, 29 Sep 2025 13:09:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:24:40 +0000   Mon, 29 Sep 2025 13:09:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:24:40 +0000   Mon, 29 Sep 2025 13:09:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:24:40 +0000   Mon, 29 Sep 2025 13:09:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-554589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b35982ad03e4f88a9e11d6c8d99da9b
	  System UUID:                1ad1c296-1dd1-4a66-b956-4731b0e0e480
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-6cxff                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-554589                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-5z49c                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-554589              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-554589     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-8kkxk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-554589              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-45phl               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mrrdc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-95jmk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node no-preload-554589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node no-preload-554589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node no-preload-554589 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-554589 event: Registered Node no-preload-554589 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-554589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-554589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-554589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-554589 event: Registered Node no-preload-554589 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [8c5c1254cf9381b1212b778b0bea8cccf2cd1cd3a2b9653e31070bc574cbe9d7] <==
	{"level":"warn","ts":"2025-09-29T13:09:33.261567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.269148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.278543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.293045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.302323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.308923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.316706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.325460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.331559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.347214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.363352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.375364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.383421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.390057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.397353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.405531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.412681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.419279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.428656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.437069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.445539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.452279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.475255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.484251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:09:33.579479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50938","server-name":"","error":"EOF"}
	
	
	==> etcd [c35a9627a37f91792211628d74fed1b99de1950706ae41ac8fdd805c688534e4] <==
	{"level":"warn","ts":"2025-09-29T13:10:43.783577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.791097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.797789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.806887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.810529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.816678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.822751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.829721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.836447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.848155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.851412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.858374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.874329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:10:43.913031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46080","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:20:43.442931Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1035}
	{"level":"info","ts":"2025-09-29T13:20:43.461105Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1035,"took":"17.797347ms","hash":2732142094,"current-db-size-bytes":3268608,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1294336,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-29T13:20:43.461178Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2732142094,"revision":1035,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T13:21:33.681796Z","caller":"traceutil/trace.go:172","msg":"trace[1991045421] transaction","detail":"{read_only:false; response_revision:1339; number_of_response:1; }","duration":"134.100388ms","start":"2025-09-29T13:21:33.547674Z","end":"2025-09-29T13:21:33.681775Z","steps":["trace[1991045421] 'process raft request'  (duration: 133.958264ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:25:43.447756Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1293}
	{"level":"info","ts":"2025-09-29T13:25:43.450236Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1293,"took":"2.071965ms","hash":3670211248,"current-db-size-bytes":3268608,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T13:25:43.450270Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3670211248,"revision":1293,"compact-revision":1035}
	{"level":"info","ts":"2025-09-29T13:28:23.411038Z","caller":"traceutil/trace.go:172","msg":"trace[1751033392] linearizableReadLoop","detail":"{readStateIndex:1944; appliedIndex:1944; }","duration":"190.896909ms","start":"2025-09-29T13:28:23.220111Z","end":"2025-09-29T13:28:23.411008Z","steps":["trace[1751033392] 'read index received'  (duration: 190.881668ms)","trace[1751033392] 'applied index is now lower than readState.Index'  (duration: 14.074µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T13:28:23.411158Z","caller":"traceutil/trace.go:172","msg":"trace[1935250354] transaction","detail":"{read_only:false; response_revision:1689; number_of_response:1; }","duration":"196.767543ms","start":"2025-09-29T13:28:23.214375Z","end":"2025-09-29T13:28:23.411142Z","steps":["trace[1935250354] 'process raft request'  (duration: 196.654579ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T13:28:23.411287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.136007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T13:28:23.411373Z","caller":"traceutil/trace.go:172","msg":"trace[1590002791] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1689; }","duration":"191.256129ms","start":"2025-09-29T13:28:23.220103Z","end":"2025-09-29T13:28:23.411359Z","steps":["trace[1590002791] 'agreement among raft nodes before linearized reading'  (duration: 190.996951ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:29:28 up  6:11,  0 users,  load average: 1.10, 0.75, 1.11
	Linux no-preload-554589 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e580b0127dfd6baba394a2d349cfec396ad4c0daff157eb4476e0a101957aff9] <==
	I0929 13:27:26.187014       1 main.go:301] handling current node
	I0929 13:27:36.185053       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:27:36.185098       1 main.go:301] handling current node
	I0929 13:27:46.179043       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:27:46.179086       1 main.go:301] handling current node
	I0929 13:27:56.186295       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:27:56.186327       1 main.go:301] handling current node
	I0929 13:28:06.180198       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:06.180243       1 main.go:301] handling current node
	I0929 13:28:16.185045       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:16.185091       1 main.go:301] handling current node
	I0929 13:28:26.186081       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:26.186117       1 main.go:301] handling current node
	I0929 13:28:36.180074       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:36.180106       1 main.go:301] handling current node
	I0929 13:28:46.179131       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:46.179166       1 main.go:301] handling current node
	I0929 13:28:56.187148       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:28:56.187211       1 main.go:301] handling current node
	I0929 13:29:06.180843       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:29:06.180880       1 main.go:301] handling current node
	I0929 13:29:16.177995       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:29:16.178216       1 main.go:301] handling current node
	I0929 13:29:26.186099       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:29:26.186143       1 main.go:301] handling current node
	
	
	==> kindnet [fe92b189cf883cbe93d9474127d870f453d75c020b22114de99123f9f623f3a1] <==
	I0929 13:09:46.686685       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:09:46.687042       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0929 13:09:46.687232       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:09:46.687257       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:09:46.687290       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:09:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:09:46.914300       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:09:46.914395       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:09:46.914408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:09:46.914610       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:09:47.385272       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:09:47.385308       1 metrics.go:72] Registering metrics
	I0929 13:09:47.385386       1 controller.go:711] "Syncing nftables rules"
	I0929 13:09:56.921074       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:09:56.921130       1 main.go:301] handling current node
	I0929 13:10:06.915171       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:10:06.915233       1 main.go:301] handling current node
	I0929 13:10:16.914755       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 13:10:16.914784       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e59ee92e127e9ebe23e71830eaec1c6942debeff812ea825dca6bd1ca6af1b8] <==
	I0929 13:09:36.929792       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:09:40.730637       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:09:40.733931       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:09:41.427817       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0929 13:09:41.627484       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0929 13:10:22.839275       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:42886: use of closed network connection
	I0929 13:10:23.547362       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:10:23.551941       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:10:23.552021       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:10:23.552077       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:10:23.625958       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.106.169.232"}
	W0929 13:10:23.636055       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:10:23.636115       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:10:23.639080       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W0929 13:10:23.643274       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:10:23.643354       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [447139fa2a8378fdb2b0fca317fd4afff7300bf48e07be8ab8398df6eb02b3c9] <==
	I0929 13:25:46.042004       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:26:34.015705       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:26:45.342872       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:26:45.342920       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:26:45.342934       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:26:45.343024       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:26:45.343078       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:26:45.345026       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:26:57.335210       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:27:47.722348       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:28:03.041552       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:28:45.344023       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:28:45.344079       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:28:45.344104       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:28:45.346212       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:28:45.346305       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:28:45.346330       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:28:49.964496       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:29:26.603615       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [448fabba6fe89ac66791993182ef471d034e865da39b82ac763c5f6f70777c96] <==
	I0929 13:09:40.725016       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:09:40.725028       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:09:40.725001       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 13:09:40.725365       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 13:09:40.726074       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 13:09:40.726083       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 13:09:40.726107       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 13:09:40.726364       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 13:09:40.726474       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 13:09:40.727016       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 13:09:40.727037       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 13:09:40.727057       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 13:09:40.727081       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 13:09:40.728475       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 13:09:40.728479       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:09:40.728508       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 13:09:40.729640       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 13:09:40.730530       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 13:09:40.730600       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 13:09:40.730652       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 13:09:40.730659       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 13:09:40.730666       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 13:09:40.733918       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:09:40.736172       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-554589" podCIDRs=["10.244.0.0/24"]
	I0929 13:09:40.746491       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [b47e2d7f81b4b23dbebd2a39fe3aa25f3a12719fdc16d113d1eafbbff29cc7d8] <==
	I0929 13:23:18.961351       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:23:48.878764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:23:48.969759       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:24:18.883484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:24:18.980670       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:24:48.888113       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:24:48.987321       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:25:18.892849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:25:18.994468       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:25:48.897677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:25:49.001757       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:26:18.902240       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:26:19.008923       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:26:48.906008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:26:49.015396       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:18.910340       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:19.023084       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:48.915059       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:49.029759       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:18.919499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:19.038081       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:48.923489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:49.044662       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:18.927394       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:19.051205       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
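The two sections above show the same symptom from both sides: the apiserver cannot fetch the OpenAPI spec for v1beta1.metrics.k8s.io (503 service unavailable), so the controller-manager keeps reporting the group as stale during quota and garbage-collector discovery. A short diagnostic sketch, assuming the cluster from this run is still reachable; the label selector is the upstream metrics-server default and may differ here:

    # inspect the aggregated APIService and whatever is (not) backing it
    kubectl --context no-preload-554589 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-554589 -n kube-system get deploy,svc metrics-server
    kubectl --context no-preload-554589 -n kube-system describe pod -l k8s-app=metrics-server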
	
	
	==> kube-proxy [5e80633031c5e83ef8f89d63ef7c4799609c4e066ed6fbc2f242b8eef8ffd994] <==
	I0929 13:10:45.408593       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:10:45.473373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:10:45.573780       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:10:45.573814       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 13:10:45.573890       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:10:45.686115       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:10:45.686199       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:10:45.692415       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:10:45.692912       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:10:45.692945       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:45.694484       1 config.go:200] "Starting service config controller"
	I0929 13:10:45.694489       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:10:45.694529       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:10:45.694524       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:10:45.694560       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:10:45.694568       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:10:45.694596       1 config.go:309] "Starting node config controller"
	I0929 13:10:45.694610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:10:45.794690       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:10:45.794691       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:10:45.794702       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:10:45.794732       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [86d180d3fafecd80e755e727e2f50ad02bd1ea0707d33e41b1e2c298740f82b2] <==
	I0929 13:09:42.573617       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:09:42.626997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:09:42.727892       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:09:42.727933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 13:09:42.728045       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:09:42.751349       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:09:42.751405       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:09:42.757984       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:09:42.758424       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:09:42.758467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:09:42.760131       1 config.go:309] "Starting node config controller"
	I0929 13:09:42.760172       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:09:42.760236       1 config.go:200] "Starting service config controller"
	I0929 13:09:42.760312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:09:42.760413       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:09:42.760424       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:09:42.760439       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:09:42.760454       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:09:42.860556       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:09:42.860585       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:09:42.860612       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:09:42.860761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [68115be5aa4bea7e23ff40644b8a834960c63546ff756e90d8e8c11fa12e3f88] <==
	I0929 13:10:43.560778       1 serving.go:386] Generated self-signed cert in-memory
	I0929 13:10:44.375547       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:10:44.375650       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:10:44.382626       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:10:44.382737       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 13:10:44.382759       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 13:10:44.382786       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:10:44.383044       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:44.383070       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:44.383278       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 13:10:44.383298       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 13:10:44.483900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:10:44.483861       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 13:10:44.483859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [f157b54ee5632361a5614f30127b6f5dfc89ff0daa05de53a9f5257c9ebec23a] <==
	E0929 13:09:34.291886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:09:34.292306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:09:34.292383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:09:34.292484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:09:34.292547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:09:34.292603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:09:34.292666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:09:34.292724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:09:34.292912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:09:34.295117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:09:34.295317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 13:09:34.297948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:09:34.298379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:09:34.299032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:09:34.299164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 13:09:35.104119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:09:35.152292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:09:35.161472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:09:35.199931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:09:35.211848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:09:35.250936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:09:35.282421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:09:35.405376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:09:35.436484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0929 13:09:35.883539       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:28:10 no-preload-554589 kubelet[602]: E0929 13:28:10.825506     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:28:18 no-preload-554589 kubelet[602]: E0929 13:28:18.825364     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:28:21 no-preload-554589 kubelet[602]: I0929 13:28:21.825125     602 scope.go:117] "RemoveContainer" containerID="18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1"
	Sep 29 13:28:21 no-preload-554589 kubelet[602]: E0929 13:28:21.825282     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:28:22 no-preload-554589 kubelet[602]: E0929 13:28:22.825756     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:28:33 no-preload-554589 kubelet[602]: I0929 13:28:33.824070     602 scope.go:117] "RemoveContainer" containerID="18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1"
	Sep 29 13:28:33 no-preload-554589 kubelet[602]: E0929 13:28:33.824279     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:28:33 no-preload-554589 kubelet[602]: E0929 13:28:33.824739     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:28:35 no-preload-554589 kubelet[602]: E0929 13:28:35.824831     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:28:44 no-preload-554589 kubelet[602]: E0929 13:28:44.825664     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:28:46 no-preload-554589 kubelet[602]: E0929 13:28:46.824778     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:28:47 no-preload-554589 kubelet[602]: I0929 13:28:47.823860     602 scope.go:117] "RemoveContainer" containerID="18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1"
	Sep 29 13:28:47 no-preload-554589 kubelet[602]: E0929 13:28:47.824050     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:28:57 no-preload-554589 kubelet[602]: E0929 13:28:57.825321     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:28:58 no-preload-554589 kubelet[602]: I0929 13:28:58.824340     602 scope.go:117] "RemoveContainer" containerID="18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1"
	Sep 29 13:28:58 no-preload-554589 kubelet[602]: E0929 13:28:58.824542     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:28:59 no-preload-554589 kubelet[602]: E0929 13:28:59.825075     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:29:10 no-preload-554589 kubelet[602]: I0929 13:29:10.824742     602 scope.go:117] "RemoveContainer" containerID="18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1"
	Sep 29 13:29:10 no-preload-554589 kubelet[602]: E0929 13:29:10.824900     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:29:10 no-preload-554589 kubelet[602]: E0929 13:29:10.825425     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:29:12 no-preload-554589 kubelet[602]: E0929 13:29:12.824879     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	Sep 29 13:29:21 no-preload-554589 kubelet[602]: I0929 13:29:21.825230     602 scope.go:117] "RemoveContainer" containerID="18757e8aef15661653386741c20ea27040e66297362881bcfffa6202668603e1"
	Sep 29 13:29:21 no-preload-554589 kubelet[602]: E0929 13:29:21.825410     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrrdc_kubernetes-dashboard(e9a2588b-38cc-46f8-9d2b-df6e430f476e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrrdc" podUID="e9a2588b-38cc-46f8-9d2b-df6e430f476e"
	Sep 29 13:29:21 no-preload-554589 kubelet[602]: E0929 13:29:21.825931     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-45phl" podUID="638c53d3-4825-4387-bb3a-56dd0be70464"
	Sep 29 13:29:26 no-preload-554589 kubelet[602]: E0929 13:29:26.824667     602 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-95jmk" podUID="010dbb38-5dfe-41e9-a655-0c6d4115135a"
	
	
	==> storage-provisioner [83ac731714bb3c23ec7e41e1d1f4691e6cd1622fc3f74c254590501650ee838a] <==
	I0929 13:10:45.370127       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:11:15.372444       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd251c5d0e78412012c06260873e50df4a3259111f8b772b29a8ce5864a7925c] <==
	W0929 13:29:03.576229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:05.579580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:05.583144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:07.585647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:07.590065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:09.592625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:09.596098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:11.598901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:11.602574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:13.605455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:13.610272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:15.613361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:15.617242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:17.621062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:17.625048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:19.628164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:19.632098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:21.634933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:21.639691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:23.642382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:23.646630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:25.649719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:25.653434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:27.657097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:29:27.661782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554589 -n no-preload-554589
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-554589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-554589 describe pod metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-554589 describe pod metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk: exit status 1 (60.993166ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-45phl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-95jmk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-554589 describe pod metrics-server-746fcd58dc-45phl kubernetes-dashboard-855c9754f9-95jmk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-djsjk" [23e361a9-ad69-4d5f-a704-eac5d4a77060] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:23:40.996211 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:25:20.683809 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:25:28.892184 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:25:39.707082 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:25:58.708365 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:26:43.751811 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:26:51.955906 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:27:08.286422 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:27:21.771167 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:27:59.086611 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:06.814759 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:32:21.54597779 +0000 UTC m=+4506.130000982
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 describe po kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-625526 describe po kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-djsjk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-625526/192.168.76.2
Start Time:       Mon, 29 Sep 2025 13:22:49 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9dpcp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-9dpcp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m32s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk to default-k8s-diff-port-625526
  Normal   Pulling    6m29s (x5 over 9m32s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     6m26s (x5 over 9m29s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m26s (x5 over 9m29s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m27s (x19 over 9m29s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m2s (x21 over 9m29s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 logs kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-625526 logs kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard: exit status 1 (70.257539ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-djsjk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-625526 logs kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-625526
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-625526:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd",
	        "Created": "2025-09-29T13:21:35.535015835Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1452163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:22:35.790994729Z",
	            "FinishedAt": "2025-09-29T13:22:34.971046468Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/hostname",
	        "HostsPath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/hosts",
	        "LogPath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd-json.log",
	        "Name": "/default-k8s-diff-port-625526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-625526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-625526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd",
	                "LowerDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-625526",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-625526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-625526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-625526",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-625526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2fc799ac60e7124fe1e2078773ec3fd609a42a5411bf191b1342269da1d9aad0",
	            "SandboxKey": "/var/run/docker/netns/2fc799ac60e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33621"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33622"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33625"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33623"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33624"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-625526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:0f:c7:b6:37:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95f7e8c854149b1628032e2a72d3bec2183e183d410a56fe3a422f2b1aab16f1",
	                    "EndpointID": "5065cae664fda66f29808766dc7549c83c8f90c3d1fde40cf38782045bed9c4c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-625526",
	                        "f29ba4b4f0d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-625526 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-625526 logs -n 25: (1.508673281s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ old-k8s-version-495121 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ start   │ -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ image   │ embed-certs-644246 image list --format=json                                                                                                                                                                                                         │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p embed-certs-644246 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p embed-certs-644246 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p embed-certs-644246                                                                                                                                                                                                                               │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p embed-certs-644246                                                                                                                                                                                                                               │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-740698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ stop    │ -p newest-cni-740698 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-740698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ start   │ -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ image   │ newest-cni-740698 image list --format=json                                                                                                                                                                                                          │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p newest-cni-740698 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p newest-cni-740698 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p newest-cni-740698                                                                                                                                                                                                                                │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p newest-cni-740698                                                                                                                                                                                                                                │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ image   │ no-preload-554589 image list --format=json                                                                                                                                                                                                          │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ pause   │ -p no-preload-554589 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ unpause │ -p no-preload-554589 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p no-preload-554589                                                                                                                                                                                                                                │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p no-preload-554589                                                                                                                                                                                                                                │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:28:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:28:47.629010 1465110 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:28:47.629105 1465110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:47.629112 1465110 out.go:374] Setting ErrFile to fd 2...
	I0929 13:28:47.629116 1465110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:47.629362 1465110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:28:47.629827 1465110 out.go:368] Setting JSON to false
	I0929 13:28:47.631050 1465110 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":22265,"bootTime":1759130263,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:28:47.631151 1465110 start.go:140] virtualization: kvm guest
	I0929 13:28:47.632789 1465110 out.go:179] * [newest-cni-740698] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:28:47.633869 1465110 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:28:47.633868 1465110 notify.go:220] Checking for updates...
	I0929 13:28:47.635694 1465110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:28:47.636760 1465110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:47.637754 1465110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:28:47.638765 1465110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:28:47.639953 1465110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:28:47.641486 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:47.642019 1465110 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:28:47.665832 1465110 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:28:47.665953 1465110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:47.723794 1465110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 13:28:47.714011734 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:47.723910 1465110 docker.go:318] overlay module found
	I0929 13:28:47.725515 1465110 out.go:179] * Using the docker driver based on existing profile
	I0929 13:28:47.726501 1465110 start.go:304] selected driver: docker
	I0929 13:28:47.726514 1465110 start.go:924] validating driver "docker" against &{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:47.726592 1465110 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:28:47.727137 1465110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:47.786604 1465110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 13:28:47.775360933 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:47.786935 1465110 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:28:47.786994 1465110 cni.go:84] Creating CNI manager for ""
	I0929 13:28:47.787058 1465110 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:28:47.787138 1465110 start.go:348] cluster config:
	{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:47.788845 1465110 out.go:179] * Starting "newest-cni-740698" primary control-plane node in "newest-cni-740698" cluster
	I0929 13:28:47.789698 1465110 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:28:47.790619 1465110 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:28:47.791466 1465110 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:47.791515 1465110 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 13:28:47.791538 1465110 cache.go:58] Caching tarball of preloaded images
	I0929 13:28:47.791581 1465110 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:28:47.791656 1465110 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 13:28:47.791668 1465110 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 13:28:47.791790 1465110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json ...
	I0929 13:28:47.814442 1465110 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:28:47.814461 1465110 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:28:47.814477 1465110 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:28:47.814502 1465110 start.go:360] acquireMachinesLock for newest-cni-740698: {Name:mkf40a81be102ef43d2455f2435b32c6c1c894a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:28:47.814571 1465110 start.go:364] duration metric: took 41.549µs to acquireMachinesLock for "newest-cni-740698"
	I0929 13:28:47.814589 1465110 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:28:47.814597 1465110 fix.go:54] fixHost starting: 
	I0929 13:28:47.814799 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:47.833656 1465110 fix.go:112] recreateIfNeeded on newest-cni-740698: state=Stopped err=<nil>
	W0929 13:28:47.833696 1465110 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:28:47.834947 1465110 out.go:252] * Restarting existing docker container for "newest-cni-740698" ...
	I0929 13:28:47.835039 1465110 cli_runner.go:164] Run: docker start newest-cni-740698
	I0929 13:28:48.076859 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:48.095440 1465110 kic.go:430] container "newest-cni-740698" state is running.
	I0929 13:28:48.095808 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:48.114058 1465110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json ...
	I0929 13:28:48.114307 1465110 machine.go:93] provisionDockerMachine start ...
	I0929 13:28:48.114405 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:48.133581 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:48.133843 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:48.133858 1465110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:28:48.134500 1465110 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46200->127.0.0.1:33631: read: connection reset by peer
	I0929 13:28:51.270952 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-740698
	
	I0929 13:28:51.270992 1465110 ubuntu.go:182] provisioning hostname "newest-cni-740698"
	I0929 13:28:51.271069 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.289296 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:51.289545 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:51.289560 1465110 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-740698 && echo "newest-cni-740698" | sudo tee /etc/hostname
	I0929 13:28:51.438761 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-740698
	
	I0929 13:28:51.438840 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.457877 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:51.458135 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:51.458154 1465110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-740698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-740698/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-740698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:28:51.593410 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:28:51.593449 1465110 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:28:51.593481 1465110 ubuntu.go:190] setting up certificates
	I0929 13:28:51.593495 1465110 provision.go:84] configureAuth start
	I0929 13:28:51.593550 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:51.611525 1465110 provision.go:143] copyHostCerts
	I0929 13:28:51.611591 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:28:51.611615 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:28:51.611700 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:28:51.611825 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:28:51.611837 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:28:51.611881 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:28:51.611991 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:28:51.612001 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:28:51.612053 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:28:51.612145 1465110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.newest-cni-740698 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-740698]
	I0929 13:28:51.883873 1465110 provision.go:177] copyRemoteCerts
	I0929 13:28:51.883933 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:28:51.883991 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.903374 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.001398 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:28:52.027859 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:28:52.052634 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 13:28:52.076437 1465110 provision.go:87] duration metric: took 482.92934ms to configureAuth
	I0929 13:28:52.076472 1465110 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:28:52.076652 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:52.076664 1465110 machine.go:96] duration metric: took 3.962343403s to provisionDockerMachine
	I0929 13:28:52.076673 1465110 start.go:293] postStartSetup for "newest-cni-740698" (driver="docker")
	I0929 13:28:52.076684 1465110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:28:52.076733 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:28:52.076772 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.094150 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.191088 1465110 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:28:52.194641 1465110 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:28:52.194668 1465110 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:28:52.194676 1465110 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:28:52.194684 1465110 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:28:52.194695 1465110 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:28:52.194737 1465110 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:28:52.194818 1465110 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:28:52.194917 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:28:52.204323 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:28:52.230005 1465110 start.go:296] duration metric: took 153.302822ms for postStartSetup
	I0929 13:28:52.230084 1465110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:28:52.230135 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.248054 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.342555 1465110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:28:52.347137 1465110 fix.go:56] duration metric: took 4.532532077s for fixHost
	I0929 13:28:52.347165 1465110 start.go:83] releasing machines lock for "newest-cni-740698", held for 4.532582488s
	I0929 13:28:52.347237 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:52.364912 1465110 ssh_runner.go:195] Run: cat /version.json
	I0929 13:28:52.364957 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.365051 1465110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:28:52.365121 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.382974 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.383162 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.554416 1465110 ssh_runner.go:195] Run: systemctl --version
	I0929 13:28:52.559399 1465110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:28:52.563991 1465110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:28:52.583272 1465110 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:28:52.583349 1465110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:28:52.592814 1465110 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:28:52.592837 1465110 start.go:495] detecting cgroup driver to use...
	I0929 13:28:52.592867 1465110 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:28:52.592905 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:28:52.606487 1465110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:28:52.618517 1465110 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:28:52.618560 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:28:52.631757 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:28:52.644305 1465110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:28:52.709227 1465110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:28:52.775137 1465110 docker.go:234] disabling docker service ...
	I0929 13:28:52.775221 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:28:52.788059 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:28:52.799783 1465110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:28:52.864439 1465110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:28:52.929637 1465110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:28:52.941537 1465110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:28:52.958075 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:28:52.968107 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:28:52.978062 1465110 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:28:52.978121 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:28:52.988006 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:28:52.997660 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:28:53.007646 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:28:53.017544 1465110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:28:53.026981 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:28:53.037048 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:28:53.047149 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:28:53.057156 1465110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:28:53.065634 1465110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:28:53.074061 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:53.138360 1465110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:28:53.242303 1465110 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:28:53.242387 1465110 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:28:53.246564 1465110 start.go:563] Will wait 60s for crictl version
	I0929 13:28:53.246638 1465110 ssh_runner.go:195] Run: which crictl
	I0929 13:28:53.250428 1465110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:28:53.285386 1465110 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 13:28:53.285461 1465110 ssh_runner.go:195] Run: containerd --version
	I0929 13:28:53.311246 1465110 ssh_runner.go:195] Run: containerd --version
	I0929 13:28:53.337354 1465110 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:28:53.338401 1465110 cli_runner.go:164] Run: docker network inspect newest-cni-740698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:28:53.355730 1465110 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 13:28:53.360006 1465110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:28:53.373781 1465110 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0929 13:28:53.374806 1465110 kubeadm.go:875] updating cluster {Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:28:53.374940 1465110 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:53.375027 1465110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:28:53.409703 1465110 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:28:53.409723 1465110 containerd.go:534] Images already preloaded, skipping extraction
	I0929 13:28:53.409781 1465110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:28:53.446238 1465110 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:28:53.446258 1465110 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:28:53.446266 1465110 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0929 13:28:53.446366 1465110 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-740698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:28:53.446423 1465110 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:28:53.482332 1465110 cni.go:84] Creating CNI manager for ""
	I0929 13:28:53.482352 1465110 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:28:53.482361 1465110 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0929 13:28:53.482383 1465110 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-740698 NodeName:newest-cni-740698 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:28:53.482515 1465110 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-740698"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:28:53.482573 1465110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:28:53.492790 1465110 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:28:53.492848 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:28:53.502450 1465110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0929 13:28:53.520767 1465110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:28:53.541161 1465110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0929 13:28:53.559905 1465110 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:28:53.563697 1465110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:28:53.575325 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:53.644353 1465110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:28:53.667523 1465110 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698 for IP: 192.168.85.2
	I0929 13:28:53.667547 1465110 certs.go:194] generating shared ca certs ...
	I0929 13:28:53.667566 1465110 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:53.667743 1465110 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:28:53.667829 1465110 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:28:53.667849 1465110 certs.go:256] generating profile certs ...
	I0929 13:28:53.667989 1465110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/client.key
	I0929 13:28:53.668064 1465110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.key.abc54583
	I0929 13:28:53.668121 1465110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.key
	I0929 13:28:53.668255 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:28:53.668287 1465110 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:28:53.668299 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:28:53.668331 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:28:53.668365 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:28:53.668397 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:28:53.668454 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:28:53.669280 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:28:53.696915 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:28:53.725095 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:28:53.759173 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:28:53.787114 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:28:53.812387 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 13:28:53.837455 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:28:53.864150 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 13:28:53.892727 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:28:53.918653 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:28:53.944563 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:28:53.970121 1465110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:28:53.988778 1465110 ssh_runner.go:195] Run: openssl version
	I0929 13:28:53.994749 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:28:54.004933 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.008622 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.008720 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.015872 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:28:54.025519 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:28:54.035467 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.039540 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.039596 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.047058 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:28:54.056922 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:28:54.066836 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.070330 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.070369 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.077657 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 13:28:54.087032 1465110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:28:54.090728 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:28:54.097689 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:28:54.104122 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:28:54.110565 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:28:54.117388 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:28:54.123946 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 13:28:54.130630 1465110 kubeadm.go:392] StartCluster: {Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:54.130735 1465110 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:28:54.130798 1465110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:28:54.167363 1465110 cri.go:89] found id: "36fd60ad8b5f43506f08923872ee0aac518a04a1fbe0bd7231ed286722550d61"
	I0929 13:28:54.167386 1465110 cri.go:89] found id: "85a7d209414fd13547d72f843320321f35203a7a91f1d1d949bcbef472b56b42"
	I0929 13:28:54.167389 1465110 cri.go:89] found id: "7ad3eea1da3e1387bfbcdf334ec1f656e07e5923ea432d735dd02eb19cac0365"
	I0929 13:28:54.167392 1465110 cri.go:89] found id: "f52b4638fb9b2c69f79356c1efc63df5afbd181951d1758d972cc553ffbc5dba"
	I0929 13:28:54.167395 1465110 cri.go:89] found id: "9ea775d7aaeeeac021fb1008a123392683e633a680971b8a0c1d0ce312bb1530"
	I0929 13:28:54.167397 1465110 cri.go:89] found id: "e7cf3d47f09c990445369fe3081a61f3b660acc90dc8849ee297eefb91ad2462"
	I0929 13:28:54.167408 1465110 cri.go:89] found id: "00afc0aa272581aa861d734fd4eff7e4d7b47a8a679dd8459df8264c4766bf57"
	I0929 13:28:54.167411 1465110 cri.go:89] found id: ""
	I0929 13:28:54.167452 1465110 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0929 13:28:54.182182 1465110 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-29T13:28:54Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0929 13:28:54.182268 1465110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:28:54.194234 1465110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:28:54.194257 1465110 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:28:54.194302 1465110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:28:54.207740 1465110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:28:54.208661 1465110 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-740698" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:54.209320 1465110 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1097891/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-740698" cluster setting kubeconfig missing "newest-cni-740698" context setting]
	I0929 13:28:54.210396 1465110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.213105 1465110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:28:54.227105 1465110 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 13:28:54.227188 1465110 kubeadm.go:593] duration metric: took 32.922621ms to restartPrimaryControlPlane
	I0929 13:28:54.227204 1465110 kubeadm.go:394] duration metric: took 96.582969ms to StartCluster
	I0929 13:28:54.227267 1465110 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.227417 1465110 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:54.229069 1465110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.229359 1465110 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:28:54.229589 1465110 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:28:54.229695 1465110 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-740698"
	I0929 13:28:54.229719 1465110 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-740698"
	I0929 13:28:54.229785 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:54.229795 1465110 addons.go:69] Setting default-storageclass=true in profile "newest-cni-740698"
	I0929 13:28:54.229812 1465110 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-740698"
	I0929 13:28:54.229847 1465110 addons.go:69] Setting metrics-server=true in profile "newest-cni-740698"
	I0929 13:28:54.229863 1465110 addons.go:238] Setting addon metrics-server=true in "newest-cni-740698"
	W0929 13:28:54.229871 1465110 addons.go:247] addon metrics-server should already be in state true
	I0929 13:28:54.229908 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	W0929 13:28:54.229922 1465110 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:28:54.229954 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.230018 1465110 addons.go:69] Setting dashboard=true in profile "newest-cni-740698"
	I0929 13:28:54.230040 1465110 addons.go:238] Setting addon dashboard=true in "newest-cni-740698"
	W0929 13:28:54.230059 1465110 addons.go:247] addon dashboard should already be in state true
	I0929 13:28:54.230085 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.230479 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230614 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230497 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230854 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.232563 1465110 out.go:179] * Verifying Kubernetes components...
	I0929 13:28:54.236717 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:54.260047 1465110 addons.go:238] Setting addon default-storageclass=true in "newest-cni-740698"
	W0929 13:28:54.260075 1465110 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:28:54.260109 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.260230 1465110 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:28:54.260637 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.261415 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:28:54.261436 1465110 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:28:54.261589 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.263389 1465110 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:28:54.264722 1465110 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:28:54.264809 1465110 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.264925 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:28:54.265009 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.267056 1465110 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:28:54.267927 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:28:54.267947 1465110 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:28:54.268035 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.291623 1465110 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:28:54.291658 1465110 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:28:54.291740 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.296013 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.312876 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.323245 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.326917 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.391936 1465110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:28:54.415344 1465110 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:28:54.415421 1465110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:28:54.433355 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:28:54.433382 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:28:54.440764 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.458141 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:28:54.458173 1465110 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:28:54.465310 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:28:54.467810 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:28:54.467831 1465110 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:28:54.497250 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:28:54.497286 1465110 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:28:54.497309 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:28:54.497326 1465110 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:28:54.529254 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:28:54.529283 1465110 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:28:54.532022 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 13:28:54.538659 1465110 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:28:54.538708 1465110 retry.go:31] will retry after 306.798742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:28:54.563503 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:28:54.563535 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:28:54.589041 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:28:54.589068 1465110 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:28:54.617100 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:28:54.617132 1465110 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:28:54.647917 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:28:54.647952 1465110 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:28:54.678004 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:28:54.678030 1465110 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:28:54.698888 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:28:54.698915 1465110 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:28:54.717149 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:28:54.845675 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.916311 1465110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:28:56.202046 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.736694687s)
	I0929 13:28:56.651351 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.119282964s)
	I0929 13:28:56.651401 1465110 addons.go:479] Verifying addon metrics-server=true in "newest-cni-740698"
	I0929 13:28:56.651456 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.934266797s)
	I0929 13:28:56.652822 1465110 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-740698 addons enable metrics-server
	
	I0929 13:28:56.694191 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.848477294s)
	I0929 13:28:56.694272 1465110 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.777927453s)
	I0929 13:28:56.694309 1465110 api_server.go:72] duration metric: took 2.464920673s to wait for apiserver process to appear ...
	I0929 13:28:56.694400 1465110 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:28:56.694433 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:56.695636 1465110 out.go:179] * Enabled addons: default-storageclass, metrics-server, dashboard, storage-provisioner
	I0929 13:28:56.696536 1465110 addons.go:514] duration metric: took 2.466970901s for enable addons: enabled=[default-storageclass metrics-server dashboard storage-provisioner]
	I0929 13:28:56.698554 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:28:56.698577 1465110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:28:57.195196 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:57.201196 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:28:57.201230 1465110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:28:57.694973 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:57.699509 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:28:57.700692 1465110 api_server.go:141] control plane version: v1.34.0
	I0929 13:28:57.700722 1465110 api_server.go:131] duration metric: took 1.006311167s to wait for apiserver health ...
	I0929 13:28:57.700734 1465110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:28:57.704357 1465110 system_pods.go:59] 9 kube-system pods found
	I0929 13:28:57.704397 1465110 system_pods.go:61] "coredns-66bc5c9577-g22nn" [9ef181d0-e9e8-4118-be6a-82c8fc1b9262] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:28:57.704407 1465110 system_pods.go:61] "etcd-newest-cni-740698" [5b7ff3a3-7c62-4c27-a650-b0b16bb740cb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:28:57.704420 1465110 system_pods.go:61] "kindnet-r7p4j" [35989a73-e8b9-4fc4-a0b9-e95c31cc7a61] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 13:28:57.704426 1465110 system_pods.go:61] "kube-apiserver-newest-cni-740698" [046eaa92-bda2-4f34-b0b9-38f5ca2aee74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:28:57.704460 1465110 system_pods.go:61] "kube-controller-manager-newest-cni-740698" [cc43f598-9fe3-421f-82bb-bc03f0b6022a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:28:57.704470 1465110 system_pods.go:61] "kube-proxy-2csmd" [3abee784-525d-4f16-91f3-83a6f4b2a704] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 13:28:57.704476 1465110 system_pods.go:61] "kube-scheduler-newest-cni-740698" [6f9039a5-c310-4969-9b07-e6e854a43e38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:28:57.704483 1465110 system_pods.go:61] "metrics-server-746fcd58dc-8n4ts" [5808bc7a-eba9-4a8a-b4f8-8a6218c7dc57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:28:57.704487 1465110 system_pods.go:61] "storage-provisioner" [8b39193c-2d16-4945-bc4e-3b8931f63fff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:28:57.704507 1465110 system_pods.go:74] duration metric: took 3.765055ms to wait for pod list to return data ...
	I0929 13:28:57.704517 1465110 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:28:57.706811 1465110 default_sa.go:45] found service account: "default"
	I0929 13:28:57.706831 1465110 default_sa.go:55] duration metric: took 2.307161ms for default service account to be created ...
	I0929 13:28:57.706843 1465110 kubeadm.go:578] duration metric: took 3.477454114s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:28:57.706858 1465110 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:28:57.709210 1465110 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:28:57.709235 1465110 node_conditions.go:123] node cpu capacity is 8
	I0929 13:28:57.709248 1465110 node_conditions.go:105] duration metric: took 2.385873ms to run NodePressure ...
	I0929 13:28:57.709259 1465110 start.go:241] waiting for startup goroutines ...
	I0929 13:28:57.709266 1465110 start.go:246] waiting for cluster config update ...
	I0929 13:28:57.709279 1465110 start.go:255] writing updated cluster config ...
	I0929 13:28:57.709560 1465110 ssh_runner.go:195] Run: rm -f paused
	I0929 13:28:57.759423 1465110 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:28:57.761826 1465110 out.go:179] * Done! kubectl is now configured to use "newest-cni-740698" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cce75c93c5bec       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   3d7279b20feea       dashboard-metrics-scraper-6ffb444bf9-84lcd
	c8f1618447982       6e38f40d628db       8 minutes ago       Running             storage-provisioner         2                   4f564be2e9c80       storage-provisioner
	0d00a3a87a9fb       409467f978b4a       9 minutes ago       Running             kindnet-cni                 1                   664c6881551ea       kindnet-mg2cv
	0b7191296275e       56cc512116c8f       9 minutes ago       Running             busybox                     1                   b37588f80b53b       busybox
	d9945b759e2bc       52546a367cc9e       9 minutes ago       Running             coredns                     1                   aae4effd190be       coredns-66bc5c9577-cw5kk
	f35afada4e61f       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         1                   4f564be2e9c80       storage-provisioner
	48f6c459d0116       df0860106674d       9 minutes ago       Running             kube-proxy                  1                   6974bcf63981d       kube-proxy-pttl4
	4565d63f5f335       46169d968e920       9 minutes ago       Running             kube-scheduler              1                   63da49474e353       kube-scheduler-default-k8s-diff-port-625526
	6c1f8254a812f       90550c43ad2bc       9 minutes ago       Running             kube-apiserver              1                   effda1d63acfa       kube-apiserver-default-k8s-diff-port-625526
	85f6bdd750a32       a0af72f2ec6d6       9 minutes ago       Running             kube-controller-manager     1                   658ff23d6ca22       kube-controller-manager-default-k8s-diff-port-625526
	83fe60c28580b       5f1f5298c888d       9 minutes ago       Running             etcd                        1                   e63cfe797d86e       etcd-default-k8s-diff-port-625526
	0cbeffbe3ba08       56cc512116c8f       10 minutes ago      Exited              busybox                     0                   904c1099a0424       busybox
	52dd411a810c3       52546a367cc9e       10 minutes ago      Exited              coredns                     0                   1749fc5a880c8       coredns-66bc5c9577-cw5kk
	8730db506be53       409467f978b4a       10 minutes ago      Exited              kindnet-cni                 0                   279e889171f08       kindnet-mg2cv
	18033609a785b       df0860106674d       10 minutes ago      Exited              kube-proxy                  0                   995f690973bc5       kube-proxy-pttl4
	83c9e3b96e2a5       5f1f5298c888d       10 minutes ago      Exited              etcd                        0                   6af02ebc92fb2       etcd-default-k8s-diff-port-625526
	bc7b53b8e499d       46169d968e920       10 minutes ago      Exited              kube-scheduler              0                   2af6e8b010833       kube-scheduler-default-k8s-diff-port-625526
	54662278673f5       90550c43ad2bc       10 minutes ago      Exited              kube-apiserver              0                   ee3d207429f16       kube-apiserver-default-k8s-diff-port-625526
	8154dc34f513a       a0af72f2ec6d6       10 minutes ago      Exited              kube-controller-manager     0                   06d2a0d3b0b98       kube-controller-manager-default-k8s-diff-port-625526
	
	
	==> containerd <==
	Sep 29 13:25:48 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:25:48.844423919Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:25:52 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:25:52.793017918Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:25:52 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:25:52.794591546Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:25:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:25:53.447132254Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:25:55 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:25:55.296215017Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:25:55 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:25:55.296269432Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 29 13:28:36 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:36.792898408Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:28:36 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:36.839655854Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 29 13:28:36 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:36.840886814Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 13:28:36 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:36.840943646Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.794365684Z" level=info msg="CreateContainer within sandbox \"3d7279b20feea9ce4d218c39eaabdaac7e5fc7adac5492cea48113c78c8e896a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.803901818Z" level=info msg="CreateContainer within sandbox \"3d7279b20feea9ce4d218c39eaabdaac7e5fc7adac5492cea48113c78c8e896a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77\""
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.804517325Z" level=info msg="StartContainer for \"cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77\""
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.875215128Z" level=info msg="StartContainer for \"cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77\" returns successfully"
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.888483363Z" level=info msg="received exit event container_id:\"cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77\"  id:\"cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77\"  pid:2773  exit_status:1  exited_at:{seconds:1759152517  nanos:888140124}"
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.915676429Z" level=info msg="shim disconnected" id=cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77 namespace=k8s.io
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.915714647Z" level=warning msg="cleaning up after shim disconnected" id=cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77 namespace=k8s.io
	Sep 29 13:28:37 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:37.915726169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:28:38 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:38.758109969Z" level=info msg="RemoveContainer for \"e3fd67af19f7af3c89dded0b1b66a0b2dca6e87656b9e903fe80baf618f0d5be\""
	Sep 29 13:28:38 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:38.762027749Z" level=info msg="RemoveContainer for \"e3fd67af19f7af3c89dded0b1b66a0b2dca6e87656b9e903fe80baf618f0d5be\" returns successfully"
	Sep 29 13:28:45 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:45.792935988Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:28:45 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:45.794488634Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:28:46 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:46.466243594Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:28:48 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:48.328730695Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:28:48 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:28:48.328778914Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	
	
	==> coredns [52dd411a810c3fa94118369009907067df080cbb16971d73b517e71e120a8c3e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55456 - 48772 "HINFO IN 5389282265455834246.5178903847650300258. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019543071s
	
	
	==> coredns [d9945b759e2bc56d4428e1d58ed3e13b62e93ce042d0d9472c533c69760f9ea6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58890 - 8999 "HINFO IN 1981937289594726429.3436502221074087452. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029862865s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-625526
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-625526
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=default-k8s-diff-port-625526
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_21_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:21:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-625526
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:32:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:30:24 +0000   Mon, 29 Sep 2025 13:21:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:30:24 +0000   Mon, 29 Sep 2025 13:21:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:30:24 +0000   Mon, 29 Sep 2025 13:21:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:30:24 +0000   Mon, 29 Sep 2025 13:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-625526
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2d034acb7334190866ba8b59be7bc8a
	  System UUID:                74d2e09c-06ad-43f6-ada3-bae8445e15be
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-cw5kk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-625526                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-mg2cv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-625526             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-625526    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-pttl4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-625526             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-k2ghw                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         9m59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-84lcd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-djsjk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-625526 event: Registered Node default-k8s-diff-port-625526 in Controller
	  Normal  Starting                 9m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m41s (x8 over 9m41s)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m41s (x8 over 9m41s)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m41s (x7 over 9m41s)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m34s                  node-controller  Node default-k8s-diff-port-625526 event: Registered Node default-k8s-diff-port-625526 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [83c9e3b96e2a5728a639dfbb2fdbc7cf856add21b860c97b648938e6beba8b60] <==
	{"level":"warn","ts":"2025-09-29T13:21:47.055260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.062932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.070313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.079337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.086881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.095041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.102550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.109601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.119177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.125410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.132986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.147388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.155452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.163707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.171159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.178637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.187235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.199989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.208577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.220366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.226749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.242255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.248843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.255651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.307112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	
	
	==> etcd [83fe60c28580bedd1ece217daf26964b49960bf03577707087864f854652b995] <==
	{"level":"warn","ts":"2025-09-29T13:22:43.705859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.718205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.725151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.732557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.739954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.746803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.754076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.761469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.768784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.783322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.789585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.803254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.810487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.817796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.824744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.831734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.838566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.844926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.851606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.866773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.873171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.879805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.928828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:22:54.981539Z","caller":"traceutil/trace.go:172","msg":"trace[1173468909] transaction","detail":"{read_only:false; response_revision:679; number_of_response:1; }","duration":"100.964185ms","start":"2025-09-29T13:22:54.880553Z","end":"2025-09-29T13:22:54.981517Z","steps":["trace[1173468909] 'process raft request'  (duration: 100.847985ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:28:23.516567Z","caller":"traceutil/trace.go:172","msg":"trace[1788374265] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"220.18857ms","start":"2025-09-29T13:28:23.296362Z","end":"2025-09-29T13:28:23.516551Z","steps":["trace[1788374265] 'process raft request'  (duration: 220.036393ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:32:22 up  6:14,  0 users,  load average: 0.11, 0.46, 0.93
	Linux default-k8s-diff-port-625526 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0d00a3a87a9fb58a74b077f501c5e6d682875dffbb59dfa4714a04ec2f5cae3d] <==
	I0929 13:30:16.018438       1 main.go:301] handling current node
	I0929 13:30:26.019062       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:30:26.019121       1 main.go:301] handling current node
	I0929 13:30:36.024455       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:30:36.024487       1 main.go:301] handling current node
	I0929 13:30:46.017524       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:30:46.017571       1 main.go:301] handling current node
	I0929 13:30:56.018752       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:30:56.018789       1 main.go:301] handling current node
	I0929 13:31:06.018109       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:06.018157       1 main.go:301] handling current node
	I0929 13:31:16.026289       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:16.026326       1 main.go:301] handling current node
	I0929 13:31:26.018349       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:26.018396       1 main.go:301] handling current node
	I0929 13:31:36.020812       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:36.020852       1 main.go:301] handling current node
	I0929 13:31:46.019577       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:46.019641       1 main.go:301] handling current node
	I0929 13:31:56.018313       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:31:56.018354       1 main.go:301] handling current node
	I0929 13:32:06.026430       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:32:06.026466       1 main.go:301] handling current node
	I0929 13:32:16.018920       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:32:16.018984       1 main.go:301] handling current node
	
	
	==> kindnet [8730db506be53d603d6e5354998b77fdfd5825608b87a598d9e5040b46cbeab7] <==
	I0929 13:21:56.825783       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:21:56.826084       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0929 13:21:56.826271       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:21:56.826290       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:21:56.826319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:21:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:21:57.090617       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:21:57.090651       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:21:57.090671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:21:57.090829       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:21:57.491111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:21:57.491144       1 metrics.go:72] Registering metrics
	I0929 13:21:57.491243       1 controller.go:711] "Syncing nftables rules"
	I0929 13:22:07.091042       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:07.091134       1 main.go:301] handling current node
	I0929 13:22:17.090893       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:17.090953       1 main.go:301] handling current node
	
	
	==> kube-apiserver [54662278673f59b001060e69dfff2d0a1b8da29b92acfc75fd75584fc542ad3f] <==
	I0929 13:21:50.371949       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:21:50.379088       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:21:55.237698       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 13:21:55.487382       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:21:55.490779       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:21:55.635801       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0929 13:22:22.560626       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:51802: use of closed network connection
	I0929 13:22:23.232199       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:22:23.236627       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:23.236685       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:22:23.236733       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:22:23.293841       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.102.35.208"}
	W0929 13:22:23.299083       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:23.299141       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0929 13:22:23.303375       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:23.303429       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [6c1f8254a812f1d28c95fdcc3487caf26368c4bcbe20d30b1f4d3694871a3866] <==
	I0929 13:28:35.852594       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:28:45.325025       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:28:45.325065       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:28:45.325079       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:28:45.326125       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:28:45.326214       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:28:45.326230       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:29:47.149439       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:29:50.715951       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:30:45.325191       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:30:45.325244       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:30:45.325258       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:30:45.327319       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:30:45.327405       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:30:45.327421       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:31:01.987413       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:31:11.998189       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:32:06.207726       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:32:12.791672       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8154dc34f513ae97cc439160f222da92653f764331ca537a3530fddeaf8f1933] <==
	I0929 13:21:54.802733       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 13:21:54.812007       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 13:21:54.832159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:21:54.832173       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:21:54.832178       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:21:54.833081       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 13:21:54.833209       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 13:21:54.833326       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:21:54.833334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 13:21:54.834325       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 13:21:54.834332       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 13:21:54.834364       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 13:21:54.834416       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 13:21:54.834419       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 13:21:54.834526       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 13:21:54.834547       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 13:21:54.834595       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 13:21:54.835374       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 13:21:54.835474       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 13:21:54.835504       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 13:21:54.838935       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 13:21:54.838987       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 13:21:54.839013       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:21:54.846377       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 13:21:54.858888       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [85f6bdd750a3279ed665d7d1e07d798cf712246f8c692be7738b3d26f25eb460] <==
	I0929 13:26:18.813957       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:26:48.764139       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:26:48.820729       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:18.768597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:18.827711       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:27:48.773143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:27:48.834408       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:18.777583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:18.841571       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:28:48.782341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:28:48.849482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:18.786027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:18.856444       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:29:48.790296       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:29:48.863492       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:30:18.794201       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:30:18.870389       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:30:48.798329       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:30:48.877387       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:31:18.802616       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:31:18.884733       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:31:48.807066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:31:48.891326       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:32:18.811127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:32:18.898340       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [18033609a785b3ef0f90fccb847276575ac648cbaec8ca8696ccd7a559d0ec57] <==
	I0929 13:21:56.177173       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:21:56.235812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:21:56.336728       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:21:56.336775       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 13:21:56.336859       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:21:56.362194       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:21:56.362272       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:21:56.368906       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:21:56.369727       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:21:56.369750       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:21:56.372113       1 config.go:200] "Starting service config controller"
	I0929 13:21:56.372135       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:21:56.372142       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:21:56.372165       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:21:56.372277       1 config.go:309] "Starting node config controller"
	I0929 13:21:56.372287       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:21:56.372568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:21:56.372578       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:21:56.472818       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:21:56.472869       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:21:56.472827       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:21:56.472869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [48f6c459d01167e4cc1acd43c7203d2da52af019cff8c87311cfcd9180c70900] <==
	I0929 13:22:45.363460       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:22:45.425201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:22:45.526403       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:22:45.526444       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 13:22:45.526559       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:22:45.552317       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:22:45.552391       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:22:45.557750       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:22:45.558227       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:22:45.558266       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:22:45.559909       1 config.go:200] "Starting service config controller"
	I0929 13:22:45.559989       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:22:45.560011       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:22:45.560042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:22:45.560073       1 config.go:309] "Starting node config controller"
	I0929 13:22:45.560084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:22:45.560091       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:22:45.560099       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:22:45.560106       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:22:45.660202       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:22:45.660197       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:22:45.660226       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4565d63f5f3359ed34daa172ad16f71853ccb0d8e3e878f6804de1e2cf087c65] <==
	I0929 13:22:43.054547       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:22:44.319693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:22:44.319732       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0929 13:22:44.319744       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:22:44.319753       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:22:44.345155       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:22:44.345188       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:22:44.348255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:22:44.348296       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:22:44.348680       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:22:44.348957       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:22:44.448456       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [bc7b53b8e499d75c0a104765688d458b2210e7543d2d10f7764511a09984e08f] <==
	E0929 13:21:47.802671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:21:47.802807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 13:21:47.802535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:21:47.803161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:21:47.803158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:21:47.803181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:21:47.803303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:21:47.803345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:21:47.803485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:21:47.803532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:21:47.803536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:21:47.803597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:21:47.803716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:21:48.682512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:21:48.762134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:21:48.788339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:21:48.790353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:21:48.796360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:21:48.813890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:21:48.840383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 13:21:48.858666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:21:48.961133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:21:48.977183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:21:48.991348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0929 13:21:51.799835       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:31:04 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:04.792061     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:31:13 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:13.792118     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:31:14 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:14.792664     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:31:17 default-k8s-diff-port-625526 kubelet[609]: I0929 13:31:17.791313     609 scope.go:117] "RemoveContainer" containerID="cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77"
	Sep 29 13:31:17 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:17.791645     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:31:24 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:24.792299     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:31:27 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:27.792345     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:31:29 default-k8s-diff-port-625526 kubelet[609]: I0929 13:31:29.792059     609 scope.go:117] "RemoveContainer" containerID="cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77"
	Sep 29 13:31:29 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:29.792271     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:31:37 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:37.793153     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:31:41 default-k8s-diff-port-625526 kubelet[609]: I0929 13:31:41.792240     609 scope.go:117] "RemoveContainer" containerID="cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77"
	Sep 29 13:31:41 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:41.792449     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:31:41 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:41.793229     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:31:49 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:49.792608     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:31:52 default-k8s-diff-port-625526 kubelet[609]: I0929 13:31:52.791879     609 scope.go:117] "RemoveContainer" containerID="cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77"
	Sep 29 13:31:52 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:52.792097     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:31:53 default-k8s-diff-port-625526 kubelet[609]: E0929 13:31:53.792524     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:32:00 default-k8s-diff-port-625526 kubelet[609]: E0929 13:32:00.792242     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:32:03 default-k8s-diff-port-625526 kubelet[609]: I0929 13:32:03.791492     609 scope.go:117] "RemoveContainer" containerID="cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77"
	Sep 29 13:32:03 default-k8s-diff-port-625526 kubelet[609]: E0929 13:32:03.791669     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:32:05 default-k8s-diff-port-625526 kubelet[609]: E0929 13:32:05.792454     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:32:11 default-k8s-diff-port-625526 kubelet[609]: E0929 13:32:11.794785     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:32:15 default-k8s-diff-port-625526 kubelet[609]: I0929 13:32:15.791393     609 scope.go:117] "RemoveContainer" containerID="cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77"
	Sep 29 13:32:15 default-k8s-diff-port-625526 kubelet[609]: E0929 13:32:15.791670     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:32:16 default-k8s-diff-port-625526 kubelet[609]: E0929 13:32:16.792830     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	
	
	==> storage-provisioner [c8f16184479829556ddc7db6b506d293b04a05ec25401f6eecaad912ee8bbfc6] <==
	W0929 13:31:58.324369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:00.328381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:00.332593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:02.335537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:02.339381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:04.342479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:04.347375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:06.350736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:06.354526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:08.358118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:08.362174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:10.366135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:10.370217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:12.373719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:12.378702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:14.382623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:14.386716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:16.390097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:16.395216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:18.398364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:18.403993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:20.407318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:20.411537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:22.414739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:32:22.419663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f35afada4e61fdd48d693176281536f8e2f82890bff5b998c13d62cb304dd982] <==
	I0929 13:22:45.332798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:23:15.336894       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 describe pod metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-625526 describe pod metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk: exit status 1 (60.04521ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-k2ghw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-djsjk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-625526 describe pod metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.77s)
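
Note: the kubelet log and pod events above point at the root cause — Docker Hub answers the unauthenticated pull of docker.io/kubernetesui/dashboard with 429 Too Many Requests, so the dashboard pod never leaves ImagePullBackOff and the 9m0s wait times out. (The fake.domain/registry.k8s.io/echoserver errors for metrics-server appear to be intentional test noise, since the addon is pointed at a non-existent registry.) A minimal diagnostic sketch, reusing the profile, pod name and image digest from the logs above; the commands are illustrative and not part of the test run:

	# reproduce the pull failure directly from inside the minikube node (bypasses the kubelet back-off timer)
	out/minikube-linux-amd64 -p default-k8s-diff-port-625526 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
	# the same 429 surfaces as ErrImagePull/ImagePullBackOff events on the pod
	kubectl --context default-k8s-diff-port-625526 -n kubernetes-dashboard get events --field-selector involvedObject.name=kubernetes-dashboard-855c9754f9-djsjk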

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-djsjk" [23e361a9-ad69-4d5f-a704-eac5d4a77060] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 13:32:57.581042 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:32:59.086690 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:33:40.996187 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:33:59.139253 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:34:26.842541 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:35:13.722022 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:35:20.683594 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:35:22.781082 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:35:28.892273 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:35:39.707511 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:35:41.422424 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:35:58.708219 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:36:43.751619 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:37:08.286734 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:37:59.086587 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:38:40.996099 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/bridge-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:38:59.139361 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:40:13.722251 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/no-preload-554589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:40:20.683667 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:40:28.892584 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:40:39.707994 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:40:58.708513 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 13:41:24.327203107 +0000 UTC m=+5048.911226294
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 describe po kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-625526 describe po kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-djsjk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-625526/192.168.76.2
Start Time:       Mon, 29 Sep 2025 13:22:49 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9dpcp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-9dpcp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk to default-k8s-diff-port-625526
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m26s (x63 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m26s (x63 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 logs kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-625526 logs kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard: exit status 1 (73.640032ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-djsjk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-625526 logs kubernetes-dashboard-855c9754f9-djsjk -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
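Note: AddonExistsAfterStop fails for the same reason as the previous subtest — the dashboard image pull is rate-limited, so the pod stays Pending. One generic way to avoid the anonymous Docker Hub limit, sketched here under the assumption that registry credentials are available (this is not something the test harness does), is to attach an imagePullSecret to the kubernetes-dashboard service account shown in the describe output above and let the kubelet retry with credentials:

	# create a registry secret (placeholder credentials)
	kubectl --context default-k8s-diff-port-625526 -n kubernetes-dashboard \
	  create secret docker-registry dockerhub-creds \
	  --docker-username=<user> --docker-password=<token>
	# let the dashboard service account use it for pulls
	kubectl --context default-k8s-diff-port-625526 -n kubernetes-dashboard \
	  patch serviceaccount kubernetes-dashboard \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'
	# recreate the pod so the next pull attempt is authenticated
	kubectl --context default-k8s-diff-port-625526 -n kubernetes-dashboard \
	  delete pod -l k8s-app=kubernetes-dashboard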
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-625526
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-625526:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd",
	        "Created": "2025-09-29T13:21:35.535015835Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1452163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:22:35.790994729Z",
	            "FinishedAt": "2025-09-29T13:22:34.971046468Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/hostname",
	        "HostsPath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/hosts",
	        "LogPath": "/var/lib/docker/containers/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd/f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd-json.log",
	        "Name": "/default-k8s-diff-port-625526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-625526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-625526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f29ba4b4f0d0ad7bb726ccd78a921114fa0e02f8993408c4c070cf20b6077fdd",
	                "LowerDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32-init/diff:/var/lib/docker/overlay2/fbd0ff8837aea1062458ef3b6c2ff01f7caaf77470820d108a1f7ca188c98aa7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b283983875cfbfb907bf7aa11f0491151097b490089be747dc9b0f850143a32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-625526",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-625526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-625526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-625526",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-625526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2fc799ac60e7124fe1e2078773ec3fd609a42a5411bf191b1342269da1d9aad0",
	            "SandboxKey": "/var/run/docker/netns/2fc799ac60e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33621"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33622"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33625"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33623"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33624"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-625526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:0f:c7:b6:37:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95f7e8c854149b1628032e2a72d3bec2183e183d410a56fe3a422f2b1aab16f1",
	                    "EndpointID": "5065cae664fda66f29808766dc7549c83c8f90c3d1fde40cf38782045bed9c4c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-625526",
	                        "f29ba4b4f0d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-625526 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-625526 logs -n 25: (1.525605559s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ old-k8s-version-495121 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p old-k8s-version-495121 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p old-k8s-version-495121                                                                                                                                                                                                                           │ old-k8s-version-495121 │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ start   │ -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ image   │ embed-certs-644246 image list --format=json                                                                                                                                                                                                         │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p embed-certs-644246 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p embed-certs-644246 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p embed-certs-644246                                                                                                                                                                                                                               │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ delete  │ -p embed-certs-644246                                                                                                                                                                                                                               │ embed-certs-644246     │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-740698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ stop    │ -p newest-cni-740698 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-740698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ start   │ -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0 │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ image   │ newest-cni-740698 image list --format=json                                                                                                                                                                                                          │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ pause   │ -p newest-cni-740698 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:28 UTC │
	│ unpause │ -p newest-cni-740698 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:28 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p newest-cni-740698                                                                                                                                                                                                                                │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p newest-cni-740698                                                                                                                                                                                                                                │ newest-cni-740698      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ image   │ no-preload-554589 image list --format=json                                                                                                                                                                                                          │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ pause   │ -p no-preload-554589 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ unpause │ -p no-preload-554589 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p no-preload-554589                                                                                                                                                                                                                                │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	│ delete  │ -p no-preload-554589                                                                                                                                                                                                                                │ no-preload-554589      │ jenkins │ v1.37.0 │ 29 Sep 25 13:29 UTC │ 29 Sep 25 13:29 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:28:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:28:47.629010 1465110 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:28:47.629105 1465110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:47.629112 1465110 out.go:374] Setting ErrFile to fd 2...
	I0929 13:28:47.629116 1465110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:28:47.629362 1465110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:28:47.629827 1465110 out.go:368] Setting JSON to false
	I0929 13:28:47.631050 1465110 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":22265,"bootTime":1759130263,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:28:47.631151 1465110 start.go:140] virtualization: kvm guest
	I0929 13:28:47.632789 1465110 out.go:179] * [newest-cni-740698] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:28:47.633869 1465110 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:28:47.633868 1465110 notify.go:220] Checking for updates...
	I0929 13:28:47.635694 1465110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:28:47.636760 1465110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:47.637754 1465110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:28:47.638765 1465110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:28:47.639953 1465110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:28:47.641486 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:47.642019 1465110 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:28:47.665832 1465110 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:28:47.665953 1465110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:47.723794 1465110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 13:28:47.714011734 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:47.723910 1465110 docker.go:318] overlay module found
	I0929 13:28:47.725515 1465110 out.go:179] * Using the docker driver based on existing profile
	I0929 13:28:47.726501 1465110 start.go:304] selected driver: docker
	I0929 13:28:47.726514 1465110 start.go:924] validating driver "docker" against &{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:47.726592 1465110 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:28:47.727137 1465110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:28:47.786604 1465110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 13:28:47.775360933 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:28:47.786935 1465110 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:28:47.786994 1465110 cni.go:84] Creating CNI manager for ""
	I0929 13:28:47.787058 1465110 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:28:47.787138 1465110 start.go:348] cluster config:
	{Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:47.788845 1465110 out.go:179] * Starting "newest-cni-740698" primary control-plane node in "newest-cni-740698" cluster
	I0929 13:28:47.789698 1465110 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 13:28:47.790619 1465110 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:28:47.791466 1465110 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:47.791515 1465110 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 13:28:47.791538 1465110 cache.go:58] Caching tarball of preloaded images
	I0929 13:28:47.791581 1465110 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:28:47.791656 1465110 preload.go:172] Found /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 13:28:47.791668 1465110 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
	I0929 13:28:47.791790 1465110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json ...
	I0929 13:28:47.814442 1465110 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:28:47.814461 1465110 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:28:47.814477 1465110 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:28:47.814502 1465110 start.go:360] acquireMachinesLock for newest-cni-740698: {Name:mkf40a81be102ef43d2455f2435b32c6c1c894a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:28:47.814571 1465110 start.go:364] duration metric: took 41.549µs to acquireMachinesLock for "newest-cni-740698"
	I0929 13:28:47.814589 1465110 start.go:96] Skipping create...Using existing machine configuration
	I0929 13:28:47.814597 1465110 fix.go:54] fixHost starting: 
	I0929 13:28:47.814799 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:47.833656 1465110 fix.go:112] recreateIfNeeded on newest-cni-740698: state=Stopped err=<nil>
	W0929 13:28:47.833696 1465110 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 13:28:47.834947 1465110 out.go:252] * Restarting existing docker container for "newest-cni-740698" ...
	I0929 13:28:47.835039 1465110 cli_runner.go:164] Run: docker start newest-cni-740698
	I0929 13:28:48.076859 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:48.095440 1465110 kic.go:430] container "newest-cni-740698" state is running.
	I0929 13:28:48.095808 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:48.114058 1465110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/config.json ...
	I0929 13:28:48.114307 1465110 machine.go:93] provisionDockerMachine start ...
	I0929 13:28:48.114405 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:48.133581 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:48.133843 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:48.133858 1465110 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:28:48.134500 1465110 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46200->127.0.0.1:33631: read: connection reset by peer
	I0929 13:28:51.270952 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-740698
	
	I0929 13:28:51.270992 1465110 ubuntu.go:182] provisioning hostname "newest-cni-740698"
	I0929 13:28:51.271069 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.289296 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:51.289545 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:51.289560 1465110 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-740698 && echo "newest-cni-740698" | sudo tee /etc/hostname
	I0929 13:28:51.438761 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-740698
	
	I0929 13:28:51.438840 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.457877 1465110 main.go:141] libmachine: Using SSH client type: native
	I0929 13:28:51.458135 1465110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33631 <nil> <nil>}
	I0929 13:28:51.458154 1465110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-740698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-740698/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-740698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:28:51.593410 1465110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:28:51.593449 1465110 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1097891/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1097891/.minikube}
	I0929 13:28:51.593481 1465110 ubuntu.go:190] setting up certificates
	I0929 13:28:51.593495 1465110 provision.go:84] configureAuth start
	I0929 13:28:51.593550 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:51.611525 1465110 provision.go:143] copyHostCerts
	I0929 13:28:51.611591 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem, removing ...
	I0929 13:28:51.611615 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem
	I0929 13:28:51.611700 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.pem (1078 bytes)
	I0929 13:28:51.611825 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem, removing ...
	I0929 13:28:51.611837 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem
	I0929 13:28:51.611881 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/cert.pem (1123 bytes)
	I0929 13:28:51.611991 1465110 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem, removing ...
	I0929 13:28:51.612001 1465110 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem
	I0929 13:28:51.612053 1465110 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1097891/.minikube/key.pem (1679 bytes)
	I0929 13:28:51.612145 1465110 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem org=jenkins.newest-cni-740698 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-740698]
	I0929 13:28:51.883873 1465110 provision.go:177] copyRemoteCerts
	I0929 13:28:51.883933 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:28:51.883991 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:51.903374 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.001398 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:28:52.027859 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:28:52.052634 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 13:28:52.076437 1465110 provision.go:87] duration metric: took 482.92934ms to configureAuth
	I0929 13:28:52.076472 1465110 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:28:52.076652 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:52.076664 1465110 machine.go:96] duration metric: took 3.962343403s to provisionDockerMachine
	I0929 13:28:52.076673 1465110 start.go:293] postStartSetup for "newest-cni-740698" (driver="docker")
	I0929 13:28:52.076684 1465110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:28:52.076733 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:28:52.076772 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.094150 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.191088 1465110 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:28:52.194641 1465110 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:28:52.194668 1465110 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:28:52.194676 1465110 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:28:52.194684 1465110 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:28:52.194695 1465110 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/addons for local assets ...
	I0929 13:28:52.194737 1465110 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1097891/.minikube/files for local assets ...
	I0929 13:28:52.194818 1465110 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem -> 11014942.pem in /etc/ssl/certs
	I0929 13:28:52.194917 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:28:52.204323 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:28:52.230005 1465110 start.go:296] duration metric: took 153.302822ms for postStartSetup
	I0929 13:28:52.230084 1465110 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:28:52.230135 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.248054 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.342555 1465110 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:28:52.347137 1465110 fix.go:56] duration metric: took 4.532532077s for fixHost
	I0929 13:28:52.347165 1465110 start.go:83] releasing machines lock for "newest-cni-740698", held for 4.532582488s
	I0929 13:28:52.347237 1465110 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-740698
	I0929 13:28:52.364912 1465110 ssh_runner.go:195] Run: cat /version.json
	I0929 13:28:52.364957 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.365051 1465110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:28:52.365121 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:52.382974 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.383162 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:52.554416 1465110 ssh_runner.go:195] Run: systemctl --version
	I0929 13:28:52.559399 1465110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:28:52.563991 1465110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:28:52.583272 1465110 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:28:52.583349 1465110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:28:52.592814 1465110 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 13:28:52.592837 1465110 start.go:495] detecting cgroup driver to use...
	I0929 13:28:52.592867 1465110 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 13:28:52.592905 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0929 13:28:52.606487 1465110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:28:52.618517 1465110 docker.go:218] disabling cri-docker service (if available) ...
	I0929 13:28:52.618560 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 13:28:52.631757 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 13:28:52.644305 1465110 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 13:28:52.709227 1465110 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 13:28:52.775137 1465110 docker.go:234] disabling docker service ...
	I0929 13:28:52.775221 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 13:28:52.788059 1465110 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 13:28:52.799783 1465110 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 13:28:52.864439 1465110 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 13:28:52.929637 1465110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:28:52.941537 1465110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:28:52.958075 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:28:52.968107 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:28:52.978062 1465110 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 13:28:52.978121 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 13:28:52.988006 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:28:52.997660 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:28:53.007646 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:28:53.017544 1465110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:28:53.026981 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:28:53.037048 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:28:53.047149 1465110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:28:53.057156 1465110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:28:53.065634 1465110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:28:53.074061 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:53.138360 1465110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:28:53.242303 1465110 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0929 13:28:53.242387 1465110 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0929 13:28:53.246564 1465110 start.go:563] Will wait 60s for crictl version
	I0929 13:28:53.246638 1465110 ssh_runner.go:195] Run: which crictl
	I0929 13:28:53.250428 1465110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:28:53.285386 1465110 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.27
	RuntimeApiVersion:  v1
	I0929 13:28:53.285461 1465110 ssh_runner.go:195] Run: containerd --version
	I0929 13:28:53.311246 1465110 ssh_runner.go:195] Run: containerd --version
	I0929 13:28:53.337354 1465110 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
	I0929 13:28:53.338401 1465110 cli_runner.go:164] Run: docker network inspect newest-cni-740698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:28:53.355730 1465110 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 13:28:53.360006 1465110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:28:53.373781 1465110 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0929 13:28:53.374806 1465110 kubeadm.go:875] updating cluster {Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:28:53.374940 1465110 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 13:28:53.375027 1465110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:28:53.409703 1465110 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:28:53.409723 1465110 containerd.go:534] Images already preloaded, skipping extraction
	I0929 13:28:53.409781 1465110 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 13:28:53.446238 1465110 containerd.go:627] all images are preloaded for containerd runtime.
	I0929 13:28:53.446258 1465110 cache_images.go:85] Images are preloaded, skipping loading
	I0929 13:28:53.446266 1465110 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 containerd true true} ...
	I0929 13:28:53.446366 1465110 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-740698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:28:53.446423 1465110 ssh_runner.go:195] Run: sudo crictl info
	I0929 13:28:53.482332 1465110 cni.go:84] Creating CNI manager for ""
	I0929 13:28:53.482352 1465110 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 13:28:53.482361 1465110 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0929 13:28:53.482383 1465110 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-740698 NodeName:newest-cni-740698 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:28:53.482515 1465110 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-740698"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 13:28:53.482573 1465110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:28:53.492790 1465110 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:28:53.492848 1465110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 13:28:53.502450 1465110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0929 13:28:53.520767 1465110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:28:53.541161 1465110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0929 13:28:53.559905 1465110 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 13:28:53.563697 1465110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:28:53.575325 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:53.644353 1465110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:28:53.667523 1465110 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698 for IP: 192.168.85.2
	I0929 13:28:53.667547 1465110 certs.go:194] generating shared ca certs ...
	I0929 13:28:53.667566 1465110 certs.go:226] acquiring lock for ca certs: {Name:mk80f04796163f71154dbe6468cabd937b3d9c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:53.667743 1465110 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key
	I0929 13:28:53.667829 1465110 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key
	I0929 13:28:53.667849 1465110 certs.go:256] generating profile certs ...
	I0929 13:28:53.667989 1465110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/client.key
	I0929 13:28:53.668064 1465110 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.key.abc54583
	I0929 13:28:53.668121 1465110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.key
	I0929 13:28:53.668255 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem (1338 bytes)
	W0929 13:28:53.668287 1465110 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494_empty.pem, impossibly tiny 0 bytes
	I0929 13:28:53.668299 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:28:53.668331 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:28:53.668365 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:28:53.668397 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/key.pem (1679 bytes)
	I0929 13:28:53.668454 1465110 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem (1708 bytes)
	I0929 13:28:53.669280 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:28:53.696915 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I0929 13:28:53.725095 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:28:53.759173 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:28:53.787114 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:28:53.812387 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 13:28:53.837455 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:28:53.864150 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/newest-cni-740698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 13:28:53.892727 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/ssl/certs/11014942.pem --> /usr/share/ca-certificates/11014942.pem (1708 bytes)
	I0929 13:28:53.918653 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:28:53.944563 1465110 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1097891/.minikube/certs/1101494.pem --> /usr/share/ca-certificates/1101494.pem (1338 bytes)
	I0929 13:28:53.970121 1465110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:28:53.988778 1465110 ssh_runner.go:195] Run: openssl version
	I0929 13:28:53.994749 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11014942.pem && ln -fs /usr/share/ca-certificates/11014942.pem /etc/ssl/certs/11014942.pem"
	I0929 13:28:54.004933 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.008622 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 12:23 /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.008720 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11014942.pem
	I0929 13:28:54.015872 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11014942.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:28:54.025519 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:28:54.035467 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.039540 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 12:18 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.039596 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:28:54.047058 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:28:54.056922 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1101494.pem && ln -fs /usr/share/ca-certificates/1101494.pem /etc/ssl/certs/1101494.pem"
	I0929 13:28:54.066836 1465110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.070330 1465110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 12:23 /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.070369 1465110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1101494.pem
	I0929 13:28:54.077657 1465110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1101494.pem /etc/ssl/certs/51391683.0"
	I0929 13:28:54.087032 1465110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:28:54.090728 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 13:28:54.097689 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 13:28:54.104122 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 13:28:54.110565 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 13:28:54.117388 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 13:28:54.123946 1465110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 13:28:54.130630 1465110 kubeadm.go:392] StartCluster: {Name:newest-cni-740698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-740698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:28:54.130735 1465110 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0929 13:28:54.130798 1465110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 13:28:54.167363 1465110 cri.go:89] found id: "36fd60ad8b5f43506f08923872ee0aac518a04a1fbe0bd7231ed286722550d61"
	I0929 13:28:54.167386 1465110 cri.go:89] found id: "85a7d209414fd13547d72f843320321f35203a7a91f1d1d949bcbef472b56b42"
	I0929 13:28:54.167389 1465110 cri.go:89] found id: "7ad3eea1da3e1387bfbcdf334ec1f656e07e5923ea432d735dd02eb19cac0365"
	I0929 13:28:54.167392 1465110 cri.go:89] found id: "f52b4638fb9b2c69f79356c1efc63df5afbd181951d1758d972cc553ffbc5dba"
	I0929 13:28:54.167395 1465110 cri.go:89] found id: "9ea775d7aaeeeac021fb1008a123392683e633a680971b8a0c1d0ce312bb1530"
	I0929 13:28:54.167397 1465110 cri.go:89] found id: "e7cf3d47f09c990445369fe3081a61f3b660acc90dc8849ee297eefb91ad2462"
	I0929 13:28:54.167408 1465110 cri.go:89] found id: "00afc0aa272581aa861d734fd4eff7e4d7b47a8a679dd8459df8264c4766bf57"
	I0929 13:28:54.167411 1465110 cri.go:89] found id: ""
	I0929 13:28:54.167452 1465110 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0929 13:28:54.182182 1465110 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-29T13:28:54Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0929 13:28:54.182268 1465110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:28:54.194234 1465110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 13:28:54.194257 1465110 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 13:28:54.194302 1465110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 13:28:54.207740 1465110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:28:54.208661 1465110 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-740698" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:54.209320 1465110 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1097891/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-740698" cluster setting kubeconfig missing "newest-cni-740698" context setting]
	I0929 13:28:54.210396 1465110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.213105 1465110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 13:28:54.227105 1465110 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 13:28:54.227188 1465110 kubeadm.go:593] duration metric: took 32.922621ms to restartPrimaryControlPlane
	I0929 13:28:54.227204 1465110 kubeadm.go:394] duration metric: took 96.582969ms to StartCluster
	I0929 13:28:54.227267 1465110 settings.go:142] acquiring lock: {Name:mk967ab7b412f5ea13a8bdbc3d08e00d0ec4417f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.227417 1465110 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:28:54.229069 1465110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1097891/kubeconfig: {Name:mk343611c88fd6ad36810bb377f9a0ca463784db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:28:54.229359 1465110 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0929 13:28:54.229589 1465110 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:28:54.229695 1465110 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-740698"
	I0929 13:28:54.229719 1465110 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-740698"
	I0929 13:28:54.229785 1465110 config.go:182] Loaded profile config "newest-cni-740698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:28:54.229795 1465110 addons.go:69] Setting default-storageclass=true in profile "newest-cni-740698"
	I0929 13:28:54.229812 1465110 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-740698"
	I0929 13:28:54.229847 1465110 addons.go:69] Setting metrics-server=true in profile "newest-cni-740698"
	I0929 13:28:54.229863 1465110 addons.go:238] Setting addon metrics-server=true in "newest-cni-740698"
	W0929 13:28:54.229871 1465110 addons.go:247] addon metrics-server should already be in state true
	I0929 13:28:54.229908 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	W0929 13:28:54.229922 1465110 addons.go:247] addon storage-provisioner should already be in state true
	I0929 13:28:54.229954 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.230018 1465110 addons.go:69] Setting dashboard=true in profile "newest-cni-740698"
	I0929 13:28:54.230040 1465110 addons.go:238] Setting addon dashboard=true in "newest-cni-740698"
	W0929 13:28:54.230059 1465110 addons.go:247] addon dashboard should already be in state true
	I0929 13:28:54.230085 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.230479 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230614 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230497 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.230854 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.232563 1465110 out.go:179] * Verifying Kubernetes components...
	I0929 13:28:54.236717 1465110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:28:54.260047 1465110 addons.go:238] Setting addon default-storageclass=true in "newest-cni-740698"
	W0929 13:28:54.260075 1465110 addons.go:247] addon default-storageclass should already be in state true
	I0929 13:28:54.260109 1465110 host.go:66] Checking if "newest-cni-740698" exists ...
	I0929 13:28:54.260230 1465110 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 13:28:54.260637 1465110 cli_runner.go:164] Run: docker container inspect newest-cni-740698 --format={{.State.Status}}
	I0929 13:28:54.261415 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 13:28:54.261436 1465110 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 13:28:54.261589 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.263389 1465110 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:28:54.264722 1465110 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 13:28:54.264809 1465110 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.264925 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:28:54.265009 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.267056 1465110 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 13:28:54.267927 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 13:28:54.267947 1465110 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 13:28:54.268035 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.291623 1465110 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:28:54.291658 1465110 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:28:54.291740 1465110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-740698
	I0929 13:28:54.296013 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.312876 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.323245 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.326917 1465110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33631 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/newest-cni-740698/id_rsa Username:docker}
	I0929 13:28:54.391936 1465110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:28:54.415344 1465110 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:28:54.415421 1465110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:28:54.433355 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 13:28:54.433382 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 13:28:54.440764 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.458141 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 13:28:54.458173 1465110 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 13:28:54.465310 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:28:54.467810 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 13:28:54.467831 1465110 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 13:28:54.497250 1465110 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 13:28:54.497286 1465110 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 13:28:54.497309 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 13:28:54.497326 1465110 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 13:28:54.529254 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 13:28:54.529283 1465110 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 13:28:54.532022 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 13:28:54.538659 1465110 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:28:54.538708 1465110 retry.go:31] will retry after 306.798742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 13:28:54.563503 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 13:28:54.563535 1465110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 13:28:54.589041 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 13:28:54.589068 1465110 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 13:28:54.617100 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 13:28:54.617132 1465110 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 13:28:54.647917 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 13:28:54.647952 1465110 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 13:28:54.678004 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 13:28:54.678030 1465110 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 13:28:54.698888 1465110 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:28:54.698915 1465110 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 13:28:54.717149 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 13:28:54.845675 1465110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:28:54.916311 1465110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:28:56.202046 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.736694687s)
	I0929 13:28:56.651351 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.119282964s)
	I0929 13:28:56.651401 1465110 addons.go:479] Verifying addon metrics-server=true in "newest-cni-740698"
	I0929 13:28:56.651456 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.934266797s)
	I0929 13:28:56.652822 1465110 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-740698 addons enable metrics-server
	
	I0929 13:28:56.694191 1465110 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.848477294s)
	I0929 13:28:56.694272 1465110 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.777927453s)
	I0929 13:28:56.694309 1465110 api_server.go:72] duration metric: took 2.464920673s to wait for apiserver process to appear ...
	I0929 13:28:56.694400 1465110 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:28:56.694433 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:56.695636 1465110 out.go:179] * Enabled addons: default-storageclass, metrics-server, dashboard, storage-provisioner
	I0929 13:28:56.696536 1465110 addons.go:514] duration metric: took 2.466970901s for enable addons: enabled=[default-storageclass metrics-server dashboard storage-provisioner]
	I0929 13:28:56.698554 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:28:56.698577 1465110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:28:57.195196 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:57.201196 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 13:28:57.201230 1465110 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 13:28:57.694973 1465110 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 13:28:57.699509 1465110 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 13:28:57.700692 1465110 api_server.go:141] control plane version: v1.34.0
	I0929 13:28:57.700722 1465110 api_server.go:131] duration metric: took 1.006311167s to wait for apiserver health ...
	I0929 13:28:57.700734 1465110 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:28:57.704357 1465110 system_pods.go:59] 9 kube-system pods found
	I0929 13:28:57.704397 1465110 system_pods.go:61] "coredns-66bc5c9577-g22nn" [9ef181d0-e9e8-4118-be6a-82c8fc1b9262] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 13:28:57.704407 1465110 system_pods.go:61] "etcd-newest-cni-740698" [5b7ff3a3-7c62-4c27-a650-b0b16bb740cb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 13:28:57.704420 1465110 system_pods.go:61] "kindnet-r7p4j" [35989a73-e8b9-4fc4-a0b9-e95c31cc7a61] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 13:28:57.704426 1465110 system_pods.go:61] "kube-apiserver-newest-cni-740698" [046eaa92-bda2-4f34-b0b9-38f5ca2aee74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 13:28:57.704460 1465110 system_pods.go:61] "kube-controller-manager-newest-cni-740698" [cc43f598-9fe3-421f-82bb-bc03f0b6022a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 13:28:57.704470 1465110 system_pods.go:61] "kube-proxy-2csmd" [3abee784-525d-4f16-91f3-83a6f4b2a704] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 13:28:57.704476 1465110 system_pods.go:61] "kube-scheduler-newest-cni-740698" [6f9039a5-c310-4969-9b07-e6e854a43e38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 13:28:57.704483 1465110 system_pods.go:61] "metrics-server-746fcd58dc-8n4ts" [5808bc7a-eba9-4a8a-b4f8-8a6218c7dc57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 13:28:57.704487 1465110 system_pods.go:61] "storage-provisioner" [8b39193c-2d16-4945-bc4e-3b8931f63fff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 13:28:57.704507 1465110 system_pods.go:74] duration metric: took 3.765055ms to wait for pod list to return data ...
	I0929 13:28:57.704517 1465110 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:28:57.706811 1465110 default_sa.go:45] found service account: "default"
	I0929 13:28:57.706831 1465110 default_sa.go:55] duration metric: took 2.307161ms for default service account to be created ...
	I0929 13:28:57.706843 1465110 kubeadm.go:578] duration metric: took 3.477454114s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 13:28:57.706858 1465110 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:28:57.709210 1465110 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 13:28:57.709235 1465110 node_conditions.go:123] node cpu capacity is 8
	I0929 13:28:57.709248 1465110 node_conditions.go:105] duration metric: took 2.385873ms to run NodePressure ...
	I0929 13:28:57.709259 1465110 start.go:241] waiting for startup goroutines ...
	I0929 13:28:57.709266 1465110 start.go:246] waiting for cluster config update ...
	I0929 13:28:57.709279 1465110 start.go:255] writing updated cluster config ...
	I0929 13:28:57.709560 1465110 ssh_runner.go:195] Run: rm -f paused
	I0929 13:28:57.759423 1465110 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 13:28:57.761826 1465110 out.go:179] * Done! kubectl is now configured to use "newest-cni-740698" cluster and "default" namespace by default
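	
	For reference, the healthz wait visible above (the api_server.go:253/279/103 lines) is simply a poll of https://192.168.85.2:8443/healthz until it stops returning 500 while post-start hooks finish. Below is a minimal Go sketch of such a poll, assuming the endpoint and the roughly 500ms retry interval seen in the log; certificate verification is skipped only to keep the sketch short, and this is not minikube's actual implementation.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	// A 500 response with "[-]poststarthook/..." lines, as in the log above,
	// just means some post-start hooks have not completed yet.
	func waitForHealthz(url string, timeout time.Duration) error {
		// InsecureSkipVerify keeps the sketch self-contained; a real client
		// would trust the cluster CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				io.Copy(io.Discard, resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is ready
				}
			}
			time.Sleep(500 * time.Millisecond) // interval comparable to the log above
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		// Endpoint taken from the log; adjust for another cluster.
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	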
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	fe748096a2f83       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   8                   3d7279b20feea       dashboard-metrics-scraper-6ffb444bf9-84lcd
	c8f1618447982       6e38f40d628db       17 minutes ago      Running             storage-provisioner         2                   4f564be2e9c80       storage-provisioner
	0d00a3a87a9fb       409467f978b4a       18 minutes ago      Running             kindnet-cni                 1                   664c6881551ea       kindnet-mg2cv
	0b7191296275e       56cc512116c8f       18 minutes ago      Running             busybox                     1                   b37588f80b53b       busybox
	d9945b759e2bc       52546a367cc9e       18 minutes ago      Running             coredns                     1                   aae4effd190be       coredns-66bc5c9577-cw5kk
	f35afada4e61f       6e38f40d628db       18 minutes ago      Exited              storage-provisioner         1                   4f564be2e9c80       storage-provisioner
	48f6c459d0116       df0860106674d       18 minutes ago      Running             kube-proxy                  1                   6974bcf63981d       kube-proxy-pttl4
	4565d63f5f335       46169d968e920       18 minutes ago      Running             kube-scheduler              1                   63da49474e353       kube-scheduler-default-k8s-diff-port-625526
	6c1f8254a812f       90550c43ad2bc       18 minutes ago      Running             kube-apiserver              1                   effda1d63acfa       kube-apiserver-default-k8s-diff-port-625526
	85f6bdd750a32       a0af72f2ec6d6       18 minutes ago      Running             kube-controller-manager     1                   658ff23d6ca22       kube-controller-manager-default-k8s-diff-port-625526
	83fe60c28580b       5f1f5298c888d       18 minutes ago      Running             etcd                        1                   e63cfe797d86e       etcd-default-k8s-diff-port-625526
	0cbeffbe3ba08       56cc512116c8f       19 minutes ago      Exited              busybox                     0                   904c1099a0424       busybox
	52dd411a810c3       52546a367cc9e       19 minutes ago      Exited              coredns                     0                   1749fc5a880c8       coredns-66bc5c9577-cw5kk
	8730db506be53       409467f978b4a       19 minutes ago      Exited              kindnet-cni                 0                   279e889171f08       kindnet-mg2cv
	18033609a785b       df0860106674d       19 minutes ago      Exited              kube-proxy                  0                   995f690973bc5       kube-proxy-pttl4
	83c9e3b96e2a5       5f1f5298c888d       19 minutes ago      Exited              etcd                        0                   6af02ebc92fb2       etcd-default-k8s-diff-port-625526
	bc7b53b8e499d       46169d968e920       19 minutes ago      Exited              kube-scheduler              0                   2af6e8b010833       kube-scheduler-default-k8s-diff-port-625526
	54662278673f5       90550c43ad2bc       19 minutes ago      Exited              kube-apiserver              0                   ee3d207429f16       kube-apiserver-default-k8s-diff-port-625526
	8154dc34f513a       a0af72f2ec6d6       19 minutes ago      Exited              kube-controller-manager     0                   06d2a0d3b0b98       kube-controller-manager-default-k8s-diff-port-625526
	
	
	==> containerd <==
	Sep 29 13:33:49 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:33:49.549603180Z" level=info msg="RemoveContainer for \"cce75c93c5bec92e3b23903fe8b7abad5a5ddc0763aa7ef3e9530d8a52589a77\" returns successfully"
	Sep 29 13:34:00 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:34:00.793390689Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:34:00 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:34:00.795098677Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:34:01 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:34:01.485978045Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:34:03 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:34:03.342243721Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:34:03 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:34:03.342287254Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.793937202Z" level=info msg="CreateContainer within sandbox \"3d7279b20feea9ce4d218c39eaabdaac7e5fc7adac5492cea48113c78c8e896a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.803722416Z" level=info msg="CreateContainer within sandbox \"3d7279b20feea9ce4d218c39eaabdaac7e5fc7adac5492cea48113c78c8e896a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e\""
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.804258473Z" level=info msg="StartContainer for \"fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e\""
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.859667399Z" level=info msg="StartContainer for \"fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e\" returns successfully"
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.872915338Z" level=info msg="received exit event container_id:\"fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e\"  id:\"fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e\"  pid:3392  exit_status:1  exited_at:{seconds:1759153133  nanos:872641322}"
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.895253804Z" level=info msg="shim disconnected" id=fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e namespace=k8s.io
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.895287296Z" level=warning msg="cleaning up after shim disconnected" id=fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e namespace=k8s.io
	Sep 29 13:38:53 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:53.895295074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 29 13:38:54 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:54.283290224Z" level=info msg="RemoveContainer for \"e10068a2ba905f8dede0dcd729a731f1c06b958e2747c591963e5392b2f5c884\""
	Sep 29 13:38:54 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:54.287291950Z" level=info msg="RemoveContainer for \"e10068a2ba905f8dede0dcd729a731f1c06b958e2747c591963e5392b2f5c884\" returns successfully"
	Sep 29 13:38:57 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:57.792605384Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 13:38:57 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:57.842531513Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 29 13:38:57 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:57.843925547Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 13:38:57 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:38:57.843980367Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 29 13:39:12 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:39:12.793142619Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 13:39:12 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:39:12.795144905Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:39:13 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:39:13.450924284Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 29 13:39:15 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:39:15.307098280Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:39:15 default-k8s-diff-port-625526 containerd[478]: time="2025-09-29T13:39:15.307148947Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
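	
	The containerd log above shows two distinct pull failures: the digest-pinned docker.io/kubernetesui/dashboard image is refused with 429 Too Many Requests (unauthenticated Docker Hub rate limit), and fake.domain/registry.k8s.io/echoserver:1.4 cannot resolve at all. The following is a minimal Go sketch of reproducing the dashboard pull directly against the node's containerd, assuming access to /run/containerd/containerd.sock and the k8s.io namespace the kubelet uses; it is illustrative only and not part of the test.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		"github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)
	
	func main() {
		// Connect to the same containerd instance the kubelet talks to.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	
		// Kubernetes-managed images live in the "k8s.io" containerd namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	
		// Digest-pinned dashboard image from the log; an anonymous pull of it
		// is what hit the Docker Hub rate limit above.
		ref := "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
		img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
		if err != nil {
			// Expect "429 Too Many Requests" here while the unauthenticated
			// rate limit is exhausted, matching the containerd log entries.
			log.Fatalf("pull failed: %v", err)
		}
		fmt.Println("pulled", img.Name())
	}
	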
	
	
	==> coredns [52dd411a810c3fa94118369009907067df080cbb16971d73b517e71e120a8c3e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55456 - 48772 "HINFO IN 5389282265455834246.5178903847650300258. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019543071s
	
	
	==> coredns [d9945b759e2bc56d4428e1d58ed3e13b62e93ce042d0d9472c533c69760f9ea6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58890 - 8999 "HINFO IN 1981937289594726429.3436502221074087452. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029862865s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-625526
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-625526
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=default-k8s-diff-port-625526
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_21_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:21:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-625526
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:41:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:40:33 +0000   Mon, 29 Sep 2025 13:21:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:40:33 +0000   Mon, 29 Sep 2025 13:21:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:40:33 +0000   Mon, 29 Sep 2025 13:21:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:40:33 +0000   Mon, 29 Sep 2025 13:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-625526
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2d034acb7334190866ba8b59be7bc8a
	  System UUID:                74d2e09c-06ad-43f6-ada3-bae8445e15be
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.27
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-cw5kk                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-625526                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-mg2cv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-625526             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-625526    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pttl4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-625526             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-k2ghw                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-84lcd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-djsjk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-625526 event: Registered Node default-k8s-diff-port-625526 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-625526 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-625526 event: Registered Node default-k8s-diff-port-625526 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 a1 f4 28 81 a8 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 2f bb 72 d0 bd 08 06
	[  +6.778142] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 83 71 a8 41 1d 08 06
	[  +0.096747] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[Sep29 13:07] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2d 17 7b b6 88 08 06
	[  +0.000371] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5a 43 49 e5 fd fa 08 06
	[ +37.870699] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[Sep29 13:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	[  +0.009082] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 a0 7d 1d f4 ea 08 06
	[ +10.861380] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 60 01 bb bd e5 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 61 5e 36 d0 11 08 06
	[ +36.402844] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 73 32 f4 f1 e6 08 06
	[  +0.000316] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 3c ea 5f b8 68 08 06
	
	
	==> etcd [83c9e3b96e2a5728a639dfbb2fdbc7cf856add21b860c97b648938e6beba8b60] <==
	{"level":"warn","ts":"2025-09-29T13:21:47.055260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.062932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.070313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.079337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.086881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.095041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.102550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.109601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.119177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.125410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.132986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.147388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.155452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.163707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.171159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.178637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.187235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.199989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.208577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.220366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.226749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.242255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.248843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.255651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:21:47.307112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	
	
	==> etcd [83fe60c28580bedd1ece217daf26964b49960bf03577707087864f854652b995] <==
	{"level":"warn","ts":"2025-09-29T13:22:43.754076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.761469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.768784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.783322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.789585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.803254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.810487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.817796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.824744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.831734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.838566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.844926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.851606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.866773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.873171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.879805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:22:43.928828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:22:54.981539Z","caller":"traceutil/trace.go:172","msg":"trace[1173468909] transaction","detail":"{read_only:false; response_revision:679; number_of_response:1; }","duration":"100.964185ms","start":"2025-09-29T13:22:54.880553Z","end":"2025-09-29T13:22:54.981517Z","steps":["trace[1173468909] 'process raft request'  (duration: 100.847985ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:28:23.516567Z","caller":"traceutil/trace.go:172","msg":"trace[1788374265] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"220.18857ms","start":"2025-09-29T13:28:23.296362Z","end":"2025-09-29T13:28:23.516551Z","steps":["trace[1788374265] 'process raft request'  (duration: 220.036393ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T13:32:43.415001Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1066}
	{"level":"info","ts":"2025-09-29T13:32:43.434501Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1066,"took":"19.111521ms","hash":1175765358,"current-db-size-bytes":3256320,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1372160,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-29T13:32:43.434590Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1175765358,"revision":1066,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T13:37:43.419948Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1324}
	{"level":"info","ts":"2025-09-29T13:37:43.422730Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1324,"took":"2.437056ms","hash":3247222850,"current-db-size-bytes":3256320,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1826816,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T13:37:43.422769Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3247222850,"revision":1324,"compact-revision":1066}
	
	
	==> kernel <==
	 13:41:25 up  6:23,  0 users,  load average: 0.04, 0.14, 0.55
	Linux default-k8s-diff-port-625526 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0d00a3a87a9fb58a74b077f501c5e6d682875dffbb59dfa4714a04ec2f5cae3d] <==
	I0929 13:39:16.019737       1 main.go:301] handling current node
	I0929 13:39:26.017770       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:39:26.017823       1 main.go:301] handling current node
	I0929 13:39:36.026037       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:39:36.026079       1 main.go:301] handling current node
	I0929 13:39:46.020806       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:39:46.020849       1 main.go:301] handling current node
	I0929 13:39:56.017475       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:39:56.017520       1 main.go:301] handling current node
	I0929 13:40:06.026540       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:40:06.026572       1 main.go:301] handling current node
	I0929 13:40:16.019761       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:40:16.019802       1 main.go:301] handling current node
	I0929 13:40:26.021201       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:40:26.021245       1 main.go:301] handling current node
	I0929 13:40:36.018621       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:40:36.018654       1 main.go:301] handling current node
	I0929 13:40:46.019819       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:40:46.019866       1 main.go:301] handling current node
	I0929 13:40:56.020863       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:40:56.020902       1 main.go:301] handling current node
	I0929 13:41:06.020060       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:41:06.020093       1 main.go:301] handling current node
	I0929 13:41:16.025124       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:41:16.025162       1 main.go:301] handling current node
	
	
	==> kindnet [8730db506be53d603d6e5354998b77fdfd5825608b87a598d9e5040b46cbeab7] <==
	I0929 13:21:56.825783       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 13:21:56.826084       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0929 13:21:56.826271       1 main.go:148] setting mtu 1500 for CNI 
	I0929 13:21:56.826290       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 13:21:56.826319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T13:21:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 13:21:57.090617       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 13:21:57.090651       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 13:21:57.090671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 13:21:57.090829       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 13:21:57.491111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 13:21:57.491144       1 metrics.go:72] Registering metrics
	I0929 13:21:57.491243       1 controller.go:711] "Syncing nftables rules"
	I0929 13:22:07.091042       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:07.091134       1 main.go:301] handling current node
	I0929 13:22:17.090893       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 13:22:17.090953       1 main.go:301] handling current node
	
	
	==> kube-apiserver [54662278673f59b001060e69dfff2d0a1b8da29b92acfc75fd75584fc542ad3f] <==
	I0929 13:21:50.371949       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:21:50.379088       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:21:55.237698       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 13:21:55.487382       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:21:55.490779       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:21:55.635801       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0929 13:22:22.560626       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:51802: use of closed network connection
	I0929 13:22:23.232199       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 13:22:23.236627       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:23.236685       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 13:22:23.236733       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0929 13:22:23.293841       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.102.35.208"}
	W0929 13:22:23.299083       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:23.299141       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0929 13:22:23.303375       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:22:23.303429       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [6c1f8254a812f1d28c95fdcc3487caf26368c4bcbe20d30b1f4d3694871a3866] <==
	I0929 13:37:45.338442       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:38:09.877366       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:38:12.146645       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:38:45.337512       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:38:45.337569       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:38:45.337583       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:38:45.338671       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:38:45.338757       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:38:45.338776       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:39:29.950225       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:39:30.076477       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 13:40:45.338271       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:40:45.338330       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 13:40:45.338345       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 13:40:45.339378       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 13:40:45.339469       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 13:40:45.339486       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 13:40:51.113368       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:40:52.408616       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8154dc34f513ae97cc439160f222da92653f764331ca537a3530fddeaf8f1933] <==
	I0929 13:21:54.802733       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 13:21:54.812007       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 13:21:54.832159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:21:54.832173       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:21:54.832178       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:21:54.833081       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 13:21:54.833209       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 13:21:54.833326       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:21:54.833334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 13:21:54.834325       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 13:21:54.834332       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 13:21:54.834364       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 13:21:54.834416       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 13:21:54.834419       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 13:21:54.834526       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 13:21:54.834547       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 13:21:54.834595       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 13:21:54.835374       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 13:21:54.835474       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 13:21:54.835504       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 13:21:54.838935       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 13:21:54.838987       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 13:21:54.839013       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:21:54.846377       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 13:21:54.858888       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [85f6bdd750a3279ed665d7d1e07d798cf712246f8c692be7738b3d26f25eb460] <==
	I0929 13:35:18.940800       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:35:48.843042       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:35:48.947605       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:36:18.846996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:36:18.954089       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:36:48.851851       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:36:48.960406       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:37:18.856217       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:37:18.967090       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:37:48.860189       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:37:48.974465       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:38:18.864264       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:38:18.981560       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:38:48.869401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:38:48.987816       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:39:18.873923       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:39:18.993843       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:39:48.878227       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:39:49.001172       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:40:18.882005       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:40:19.008391       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:40:48.886680       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:40:49.015401       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 13:41:18.890930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 13:41:19.022242       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [18033609a785b3ef0f90fccb847276575ac648cbaec8ca8696ccd7a559d0ec57] <==
	I0929 13:21:56.177173       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:21:56.235812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:21:56.336728       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:21:56.336775       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 13:21:56.336859       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:21:56.362194       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:21:56.362272       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:21:56.368906       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:21:56.369727       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:21:56.369750       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:21:56.372113       1 config.go:200] "Starting service config controller"
	I0929 13:21:56.372135       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:21:56.372142       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:21:56.372165       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:21:56.372277       1 config.go:309] "Starting node config controller"
	I0929 13:21:56.372287       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:21:56.372568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:21:56.372578       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:21:56.472818       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:21:56.472869       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:21:56.472827       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:21:56.472869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [48f6c459d01167e4cc1acd43c7203d2da52af019cff8c87311cfcd9180c70900] <==
	I0929 13:22:45.363460       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:22:45.425201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:22:45.526403       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:22:45.526444       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 13:22:45.526559       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:22:45.552317       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:22:45.552391       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:22:45.557750       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:22:45.558227       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:22:45.558266       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:22:45.559909       1 config.go:200] "Starting service config controller"
	I0929 13:22:45.559989       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:22:45.560011       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:22:45.560042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:22:45.560073       1 config.go:309] "Starting node config controller"
	I0929 13:22:45.560084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:22:45.560091       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:22:45.560099       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:22:45.560106       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:22:45.660202       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:22:45.660197       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:22:45.660226       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4565d63f5f3359ed34daa172ad16f71853ccb0d8e3e878f6804de1e2cf087c65] <==
	I0929 13:22:43.054547       1 serving.go:386] Generated self-signed cert in-memory
	W0929 13:22:44.319693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 13:22:44.319732       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0929 13:22:44.319744       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 13:22:44.319753       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 13:22:44.345155       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 13:22:44.345188       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:22:44.348255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:22:44.348296       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:22:44.348680       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:22:44.348957       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 13:22:44.448456       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [bc7b53b8e499d75c0a104765688d458b2210e7543d2d10f7764511a09984e08f] <==
	E0929 13:21:47.802671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:21:47.802807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 13:21:47.802535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:21:47.803161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:21:47.803158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:21:47.803181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:21:47.803303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:21:47.803345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:21:47.803485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:21:47.803532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:21:47.803536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:21:47.803597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:21:47.803716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:21:48.682512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:21:48.762134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:21:48.788339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:21:48.790353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:21:48.796360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:21:48.813890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:21:48.840383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 13:21:48.858666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:21:48.961133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:21:48.977183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:21:48.991348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0929 13:21:51.799835       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:40:06 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:06.792157     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:40:14 default-k8s-diff-port-625526 kubelet[609]: I0929 13:40:14.791362     609 scope.go:117] "RemoveContainer" containerID="fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e"
	Sep 29 13:40:14 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:14.791517     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:40:14 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:14.792122     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:40:18 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:18.792572     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:40:26 default-k8s-diff-port-625526 kubelet[609]: I0929 13:40:26.792021     609 scope.go:117] "RemoveContainer" containerID="fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e"
	Sep 29 13:40:26 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:26.792242     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:40:29 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:29.792401     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:40:30 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:30.792387     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:40:37 default-k8s-diff-port-625526 kubelet[609]: I0929 13:40:37.791788     609 scope.go:117] "RemoveContainer" containerID="fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e"
	Sep 29 13:40:37 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:37.792029     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:40:41 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:41.793281     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:40:45 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:45.792333     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:40:50 default-k8s-diff-port-625526 kubelet[609]: I0929 13:40:50.791444     609 scope.go:117] "RemoveContainer" containerID="fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e"
	Sep 29 13:40:50 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:50.791750     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:40:53 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:53.792851     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:40:59 default-k8s-diff-port-625526 kubelet[609]: E0929 13:40:59.792767     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:41:05 default-k8s-diff-port-625526 kubelet[609]: I0929 13:41:05.792012     609 scope.go:117] "RemoveContainer" containerID="fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e"
	Sep 29 13:41:05 default-k8s-diff-port-625526 kubelet[609]: E0929 13:41:05.792192     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:41:08 default-k8s-diff-port-625526 kubelet[609]: E0929 13:41:08.792417     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:41:12 default-k8s-diff-port-625526 kubelet[609]: E0929 13:41:12.792923     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	Sep 29 13:41:20 default-k8s-diff-port-625526 kubelet[609]: I0929 13:41:20.791982     609 scope.go:117] "RemoveContainer" containerID="fe748096a2f83a4bc3985491e7cb3100a07821dd26a992454ebc26bd50967a9e"
	Sep 29 13:41:20 default-k8s-diff-port-625526 kubelet[609]: E0929 13:41:20.792187     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84lcd_kubernetes-dashboard(daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84lcd" podUID="daf441ed-0e6e-4b4d-982d-9d5c9a9a9f3a"
	Sep 29 13:41:22 default-k8s-diff-port-625526 kubelet[609]: E0929 13:41:22.792321     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k2ghw" podUID="c11a1fa7-c21f-47af-980f-7b1b08f6cf57"
	Sep 29 13:41:24 default-k8s-diff-port-625526 kubelet[609]: E0929 13:41:24.792547     609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-djsjk" podUID="23e361a9-ad69-4d5f-a704-eac5d4a77060"
	
	
	==> storage-provisioner [c8f16184479829556ddc7db6b506d293b04a05ec25401f6eecaad912ee8bbfc6] <==
	W0929 13:41:00.402130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:02.405396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:02.409322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:04.412214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:04.416308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:06.420038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:06.423945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:08.426905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:08.430857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:10.433618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:10.438377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:12.441512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:12.445524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:14.448364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:14.452476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:16.456254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:16.460012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:18.462946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:18.467745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:20.470061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:20.473831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:22.476734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:22.480899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:24.483909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:41:24.489653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f35afada4e61fdd48d693176281536f8e2f82890bff5b998c13d62cb304dd982] <==
	I0929 13:22:45.332798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 13:23:15.336894       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 describe pod metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-625526 describe pod metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk: exit status 1 (59.140472ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-k2ghw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-djsjk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-625526 describe pod metrics-server-746fcd58dc-k2ghw kubernetes-dashboard-855c9754f9-djsjk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.88s)

                                                
                                    

Test pass (280/325)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.63
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 13.69
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.2
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.16
21 TestBinaryMirror 0.79
22 TestOffline 58.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 154.85
29 TestAddons/serial/Volcano 39.66
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.46
35 TestAddons/parallel/Registry 19.03
36 TestAddons/parallel/RegistryCreds 0.72
37 TestAddons/parallel/Ingress 20.9
38 TestAddons/parallel/InspektorGadget 5.27
39 TestAddons/parallel/MetricsServer 5.72
41 TestAddons/parallel/CSI 46.6
42 TestAddons/parallel/Headlamp 18.57
43 TestAddons/parallel/CloudSpanner 5.53
44 TestAddons/parallel/LocalPath 10.36
45 TestAddons/parallel/NvidiaDevicePlugin 6.64
46 TestAddons/parallel/Yakd 10.74
47 TestAddons/parallel/AmdGpuDevicePlugin 5.48
48 TestAddons/StoppedEnableDisable 12.61
49 TestCertOptions 30.55
50 TestCertExpiration 220.26
52 TestForceSystemdFlag 24.95
53 TestForceSystemdEnv 33.83
55 TestKVMDriverInstallOrUpdate 1.05
59 TestErrorSpam/setup 19.92
60 TestErrorSpam/start 0.61
61 TestErrorSpam/status 0.9
62 TestErrorSpam/pause 1.45
63 TestErrorSpam/unpause 1.5
64 TestErrorSpam/stop 1.93
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 46.78
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.31
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.88
76 TestFunctional/serial/CacheCmd/cache/add_local 2.1
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 40.66
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.43
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 4.23
90 TestFunctional/parallel/ConfigCmd 0.35
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.91
99 TestFunctional/parallel/AddonsCmd 0.13
102 TestFunctional/parallel/SSHCmd 0.55
103 TestFunctional/parallel/CpCmd 1.7
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.57
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
114 TestFunctional/parallel/License 0.4
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.51
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.27
133 TestFunctional/parallel/ImageCommands/Setup 1.98
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.94
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
145 TestFunctional/parallel/ProfileCmd/profile_list 0.37
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
147 TestFunctional/parallel/MountCmd/any-port 6.7
148 TestFunctional/parallel/MountCmd/specific-port 1.75
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 102.25
163 TestMultiControlPlane/serial/DeployApp 47.35
164 TestMultiControlPlane/serial/PingHostFromPods 1.13
165 TestMultiControlPlane/serial/AddWorkerNode 12.65
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 17.07
169 TestMultiControlPlane/serial/StopSecondaryNode 12.65
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.17
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 94.53
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.06
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/StopCluster 35.87
177 TestMultiControlPlane/serial/RestartCluster 56.31
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 25.69
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
184 TestJSONOutput/start/Command 42.39
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.66
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.61
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.72
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
209 TestKicCustomNetwork/create_custom_network 34.92
210 TestKicCustomNetwork/use_default_bridge_network 23.02
211 TestKicExistingNetwork 26.14
212 TestKicCustomSubnet 24.96
213 TestKicStaticIP 24.83
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 47.23
218 TestMountStart/serial/StartWithMountFirst 6
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 6.38
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.66
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.19
225 TestMountStart/serial/RestartStopped 7.16
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 52.22
230 TestMultiNode/serial/DeployApp2Nodes 16.31
231 TestMultiNode/serial/PingHostFrom2Pods 0.77
232 TestMultiNode/serial/AddNode 11.71
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.66
235 TestMultiNode/serial/CopyFile 9.58
236 TestMultiNode/serial/StopNode 2.21
237 TestMultiNode/serial/StartAfterStop 6.98
238 TestMultiNode/serial/RestartKeepsNodes 71.02
239 TestMultiNode/serial/DeleteNode 5.16
240 TestMultiNode/serial/StopMultiNode 23.87
241 TestMultiNode/serial/RestartMultiNode 44.59
242 TestMultiNode/serial/ValidateNameConflict 23.81
247 TestPreload 126.75
249 TestScheduledStopUnix 100.56
252 TestInsufficientStorage 9.41
253 TestRunningBinaryUpgrade 46.48
255 TestKubernetesUpgrade 321.07
256 TestMissingContainerUpgrade 126.49
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 32.94
260 TestNoKubernetes/serial/StartWithStopK8s 8.62
261 TestNoKubernetes/serial/Start 4.97
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
263 TestNoKubernetes/serial/ProfileList 1.79
264 TestNoKubernetes/serial/Stop 1.2
265 TestNoKubernetes/serial/StartNoArgs 7.02
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
274 TestNetworkPlugins/group/false 4.39
278 TestStoppedBinaryUpgrade/Setup 2.64
279 TestStoppedBinaryUpgrade/Upgrade 51.22
280 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
289 TestPause/serial/Start 44.18
290 TestNetworkPlugins/group/auto/Start 40.97
291 TestNetworkPlugins/group/kindnet/Start 43.91
292 TestPause/serial/SecondStartNoReconfiguration 6.36
293 TestNetworkPlugins/group/auto/KubeletFlags 0.3
294 TestNetworkPlugins/group/auto/NetCatPod 8.21
295 TestPause/serial/Pause 0.79
296 TestPause/serial/VerifyStatus 0.36
297 TestPause/serial/Unpause 0.73
298 TestPause/serial/PauseAgain 0.78
299 TestPause/serial/DeletePaused 2.83
300 TestNetworkPlugins/group/auto/DNS 0.15
301 TestNetworkPlugins/group/auto/Localhost 0.11
302 TestNetworkPlugins/group/auto/HairPin 0.12
303 TestPause/serial/VerifyDeletedResources 15.61
305 TestNetworkPlugins/group/custom-flannel/Start 45.61
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
308 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
309 TestNetworkPlugins/group/kindnet/DNS 0.15
310 TestNetworkPlugins/group/kindnet/Localhost 0.12
311 TestNetworkPlugins/group/kindnet/HairPin 0.11
312 TestNetworkPlugins/group/enable-default-cni/Start 34.07
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
315 TestNetworkPlugins/group/custom-flannel/DNS 0.15
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
320 TestNetworkPlugins/group/flannel/Start 46.1
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
324 TestNetworkPlugins/group/bridge/Start 64.49
325 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
327 TestNetworkPlugins/group/flannel/NetCatPod 8.19
329 TestStartStop/group/old-k8s-version/serial/FirstStart 51.83
330 TestNetworkPlugins/group/flannel/DNS 0.18
331 TestNetworkPlugins/group/flannel/Localhost 0.14
332 TestNetworkPlugins/group/flannel/HairPin 0.14
334 TestStartStop/group/embed-certs/serial/FirstStart 40.6
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
336 TestNetworkPlugins/group/bridge/NetCatPod 9.24
337 TestNetworkPlugins/group/bridge/DNS 0.14
338 TestNetworkPlugins/group/bridge/Localhost 0.13
339 TestNetworkPlugins/group/bridge/HairPin 0.12
340 TestStartStop/group/old-k8s-version/serial/DeployApp 10.28
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
343 TestStartStop/group/no-preload/serial/FirstStart 64.2
344 TestStartStop/group/old-k8s-version/serial/Stop 12.81
345 TestStartStop/group/embed-certs/serial/DeployApp 11.29
346 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
347 TestStartStop/group/old-k8s-version/serial/SecondStart 43.56
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
349 TestStartStop/group/embed-certs/serial/Stop 11.98
350 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
351 TestStartStop/group/embed-certs/serial/SecondStart 44.89
353 TestStartStop/group/no-preload/serial/DeployApp 9.24
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
356 TestStartStop/group/no-preload/serial/Stop 11.93
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
358 TestStartStop/group/no-preload/serial/SecondStart 47.9
364 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.06
365 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.83
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.71
371 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
372 TestStartStop/group/old-k8s-version/serial/Pause 2.64
374 TestStartStop/group/newest-cni/serial/FirstStart 27.17
375 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
376 TestStartStop/group/embed-certs/serial/Pause 2.76
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
379 TestStartStop/group/newest-cni/serial/Stop 1.22
380 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/newest-cni/serial/SecondStart 10.51
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
385 TestStartStop/group/newest-cni/serial/Pause 2.62
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
387 TestStartStop/group/no-preload/serial/Pause 2.61
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.67
TestDownloadOnly/v1.28.0/json-events (13.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-659177 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-659177 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.632797481s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.63s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 12:17:29.087051 1101494 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0929 12:17:29.087192 1101494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-659177
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-659177: exit status 85 (60.518976ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-659177 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-659177 │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:17:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:17:15.497091 1101505 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:17:15.497333 1101505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:17:15.497341 1101505 out.go:374] Setting ErrFile to fd 2...
	I0929 12:17:15.497345 1101505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:17:15.497579 1101505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	W0929 12:17:15.497717 1101505 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21652-1097891/.minikube/config/config.json: open /home/jenkins/minikube-integration/21652-1097891/.minikube/config/config.json: no such file or directory
	I0929 12:17:15.498209 1101505 out.go:368] Setting JSON to true
	I0929 12:17:15.499261 1101505 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17972,"bootTime":1759130263,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:17:15.499347 1101505 start.go:140] virtualization: kvm guest
	I0929 12:17:15.501397 1101505 out.go:99] [download-only-659177] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 12:17:15.501514 1101505 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 12:17:15.501560 1101505 notify.go:220] Checking for updates...
	I0929 12:17:15.502543 1101505 out.go:171] MINIKUBE_LOCATION=21652
	I0929 12:17:15.503684 1101505 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:17:15.504773 1101505 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:17:15.505796 1101505 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:17:15.506872 1101505 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 12:17:15.508756 1101505 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 12:17:15.509021 1101505 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:17:15.532375 1101505 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:17:15.532489 1101505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:17:15.586629 1101505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 12:17:15.576977676 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:17:15.586741 1101505 docker.go:318] overlay module found
	I0929 12:17:15.588166 1101505 out.go:99] Using the docker driver based on user configuration
	I0929 12:17:15.588203 1101505 start.go:304] selected driver: docker
	I0929 12:17:15.588212 1101505 start.go:924] validating driver "docker" against <nil>
	I0929 12:17:15.588301 1101505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:17:15.642511 1101505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 12:17:15.632647226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:17:15.642691 1101505 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:17:15.643437 1101505 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 12:17:15.643656 1101505 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 12:17:15.645277 1101505 out.go:171] Using Docker driver with root privileges
	I0929 12:17:15.646251 1101505 cni.go:84] Creating CNI manager for ""
	I0929 12:17:15.646322 1101505 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:17:15.646339 1101505 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 12:17:15.646407 1101505 start.go:348] cluster config:
	{Name:download-only-659177 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-659177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:17:15.647489 1101505 out.go:99] Starting "download-only-659177" primary control-plane node in "download-only-659177" cluster
	I0929 12:17:15.647506 1101505 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 12:17:15.648454 1101505 out.go:99] Pulling base image v0.0.48 ...
	I0929 12:17:15.648478 1101505 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0929 12:17:15.648603 1101505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:17:15.665041 1101505 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 12:17:15.665239 1101505 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 12:17:15.665331 1101505 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 12:17:15.756029 1101505 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I0929 12:17:15.756093 1101505 cache.go:58] Caching tarball of preloaded images
	I0929 12:17:15.756325 1101505 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0929 12:17:15.758049 1101505 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 12:17:15.758072 1101505 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 ...
	I0929 12:17:15.870172 1101505 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-659177 host does not exist
	  To start a cluster, run: "minikube start -p download-only-659177"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
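As a side note (not part of the captured output), the v1.28.0 preload fetched above can be spot-checked on the test host; the tarball path and md5 digest are copied verbatim from the download URL in the log, the rest is an illustrative sketch:

	# verify the cached preload against the checksum embedded in the download URL
	md5sum /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	# expected digest: 2746dfda401436a5341e0500068bf339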

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-659177
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (13.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-952083 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-952083 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.693256742s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (13.69s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 12:17:43.179744 1101494 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0929 12:17:43.179805 1101494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-952083
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-952083: exit status 85 (59.756404ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-659177 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-659177 │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:17 UTC │
	│ delete  │ -p download-only-659177                                                                                                                                                               │ download-only-659177 │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:17 UTC │
	│ start   │ -o=json --download-only -p download-only-952083 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-952083 │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:17:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:17:29.527341 1101872 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:17:29.527643 1101872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:17:29.527655 1101872 out.go:374] Setting ErrFile to fd 2...
	I0929 12:17:29.527661 1101872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:17:29.527883 1101872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:17:29.528412 1101872 out.go:368] Setting JSON to true
	I0929 12:17:29.529469 1101872 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17986,"bootTime":1759130263,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:17:29.529580 1101872 start.go:140] virtualization: kvm guest
	I0929 12:17:29.531351 1101872 out.go:99] [download-only-952083] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:17:29.531507 1101872 notify.go:220] Checking for updates...
	I0929 12:17:29.532707 1101872 out.go:171] MINIKUBE_LOCATION=21652
	I0929 12:17:29.533923 1101872 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:17:29.535240 1101872 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:17:29.536348 1101872 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:17:29.537488 1101872 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 12:17:29.539496 1101872 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 12:17:29.539788 1101872 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:17:29.563210 1101872 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:17:29.563302 1101872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:17:29.616712 1101872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 12:17:29.607004729 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:17:29.616824 1101872 docker.go:318] overlay module found
	I0929 12:17:29.618278 1101872 out.go:99] Using the docker driver based on user configuration
	I0929 12:17:29.618319 1101872 start.go:304] selected driver: docker
	I0929 12:17:29.618331 1101872 start.go:924] validating driver "docker" against <nil>
	I0929 12:17:29.618416 1101872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:17:29.674120 1101872 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 12:17:29.66462433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:17:29.674277 1101872 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:17:29.674770 1101872 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 12:17:29.674916 1101872 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 12:17:29.676459 1101872 out.go:171] Using Docker driver with root privileges
	I0929 12:17:29.677380 1101872 cni.go:84] Creating CNI manager for ""
	I0929 12:17:29.677433 1101872 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0929 12:17:29.677446 1101872 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 12:17:29.677512 1101872 start.go:348] cluster config:
	{Name:download-only-952083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-952083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:17:29.678549 1101872 out.go:99] Starting "download-only-952083" primary control-plane node in "download-only-952083" cluster
	I0929 12:17:29.678567 1101872 cache.go:123] Beginning downloading kic base image for docker with containerd
	I0929 12:17:29.679535 1101872 out.go:99] Pulling base image v0.0.48 ...
	I0929 12:17:29.679555 1101872 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 12:17:29.679602 1101872 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:17:29.695863 1101872 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 12:17:29.696003 1101872 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 12:17:29.696021 1101872 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 12:17:29.696026 1101872 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 12:17:29.696046 1101872 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 12:17:29.784780 1101872 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	I0929 12:17:29.784820 1101872 cache.go:58] Caching tarball of preloaded images
	I0929 12:17:29.785044 1101872 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
	I0929 12:17:29.786634 1101872 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 12:17:29.786663 1101872 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4 ...
	I0929 12:17:29.902450 1101872 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2b7b36e7513c2e517ecf49b6f3ce02cf -> /home/jenkins/minikube-integration/21652-1097891/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-952083 host does not exist
	  To start a cluster, run: "minikube start -p download-only-952083"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-952083
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (1.16s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-257922 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-257922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-257922
--- PASS: TestDownloadOnlyKic (1.16s)

                                                
                                    
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 12:17:44.990697 1101494 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-052185 --alsologtostderr --binary-mirror http://127.0.0.1:46837 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-052185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-052185
--- PASS: TestBinaryMirror (0.79s)
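For context, the flow exercised by TestBinaryMirror can be reproduced roughly as follows; the mirror address matches the log above, while the local HTTP server, directory layout, and profile name are assumptions added for illustration:

	# serve pre-fetched kubectl/kubeadm/kubelet binaries on the mirror port used by the test (layout hypothetical)
	python3 -m http.server 46837 --directory ./mirror &
	# point minikube's binary downloads at that mirror, as the test does
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr \
	  --binary-mirror http://127.0.0.1:46837 --driver=docker --container-runtime=containerd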

                                                
                                    
TestOffline (58.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-036074 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-036074 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (56.483997365s)
helpers_test.go:175: Cleaning up "offline-containerd-036074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-036074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-036074: (2.493784109s)
--- PASS: TestOffline (58.98s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-752861
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-752861: exit status 85 (49.623706ms)

                                                
                                                
-- stdout --
	* Profile "addons-752861" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-752861"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-752861
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-752861: exit status 85 (50.140248ms)

                                                
                                                
-- stdout --
	* Profile "addons-752861" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-752861"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (154.85s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-752861 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-752861 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.845341602s)
--- PASS: TestAddons/Setup (154.85s)

                                                
                                    
TestAddons/serial/Volcano (39.66s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 17.200824ms
addons_test.go:876: volcano-admission stabilized in 17.280661ms
addons_test.go:868: volcano-scheduler stabilized in 17.318556ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-lhwrj" [144d8ecf-2b36-4c26-9268-9e64b8b9d298] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004228721s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-rcf4n" [4aec88cd-3a5f-4095-97b2-92027ff74586] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003470225s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-jcwdq" [addc8974-a902-4a15-9aa3-1f2777c86f55] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003759637s
addons_test.go:903: (dbg) Run:  kubectl --context addons-752861 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-752861 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-752861 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [c508edab-b270-4499-b86e-d00247d066a4] Pending
helpers_test.go:352: "test-job-nginx-0" [c508edab-b270-4499-b86e-d00247d066a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [c508edab-b270-4499-b86e-d00247d066a4] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004243986s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-752861 addons disable volcano --alsologtostderr -v=1: (11.321289503s)
--- PASS: TestAddons/serial/Volcano (39.66s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-752861 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-752861 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.46s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-752861 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-752861 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e8227a93-ed7e-4af4-b062-e54774d43f81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e8227a93-ed7e-4af4-b062-e54774d43f81] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003538794s
addons_test.go:694: (dbg) Run:  kubectl --context addons-752861 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-752861 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-752861 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

TestAddons/parallel/Registry (19.03s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.911428ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-5ws9d" [69da4783-cf07-4f28-a9aa-9811e6bff42d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.072863245s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-b692z" [451d6659-7cda-4bf8-a7a7-aa2c7bc77a9d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003673138s
addons_test.go:392: (dbg) Run:  kubectl --context addons-752861 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-752861 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-752861 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.097889197s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 ip
2025/09/29 12:21:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.03s)

TestAddons/parallel/RegistryCreds (0.72s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.501628ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-752861
addons_test.go:332: (dbg) Run:  kubectl --context addons-752861 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/Ingress (20.9s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-752861 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-752861 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-752861 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [af7401d2-ffc2-4470-b97f-ff4b4abd6dc5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [af7401d2-ffc2-4470-b97f-ff4b4abd6dc5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003806398s
I0929 12:21:46.409888 1101494 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-752861 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-752861 addons disable ingress --alsologtostderr -v=1: (7.728139265s)
--- PASS: TestAddons/parallel/Ingress (20.90s)

TestAddons/parallel/InspektorGadget (5.27s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mzpwf" [ebbea239-7bd9-4e6b-8159-1b86d8f70d7e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003052469s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.27s)

TestAddons/parallel/MetricsServer (5.72s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.379578ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4tcbk" [368eae99-ea10-457b-854a-122ce012c3ad] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002362792s
addons_test.go:463: (dbg) Run:  kubectl --context addons-752861 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

TestAddons/parallel/CSI (46.6s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0929 12:21:31.039097 1101494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 12:21:31.043071 1101494 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 12:21:31.043183 1101494 kapi.go:107] duration metric: took 4.108766ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.129584ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-752861 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-752861 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0b1152f7-d27c-4572-8213-0a454f2095db] Pending
helpers_test.go:352: "task-pv-pod" [0b1152f7-d27c-4572-8213-0a454f2095db] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0b1152f7-d27c-4572-8213-0a454f2095db] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004516909s
addons_test.go:572: (dbg) Run:  kubectl --context addons-752861 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-752861 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-752861 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-752861 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-752861 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-752861 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-752861 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d29e6bed-9259-4c38-89d1-875b97c259d9] Pending
helpers_test.go:352: "task-pv-pod-restore" [d29e6bed-9259-4c38-89d1-875b97c259d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d29e6bed-9259-4c38-89d1-875b97c259d9] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003878779s
addons_test.go:614: (dbg) Run:  kubectl --context addons-752861 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-752861 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-752861 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-752861 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.546825516s)
--- PASS: TestAddons/parallel/CSI (46.60s)

TestAddons/parallel/Headlamp (18.57s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-752861 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-wr59m" [14c68502-1c3f-4b52-841e-788c4e2d65e6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-wr59m" [14c68502-1c3f-4b52-841e-788c4e2d65e6] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003805611s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-752861 addons disable headlamp --alsologtostderr -v=1: (5.793388838s)
--- PASS: TestAddons/parallel/Headlamp (18.57s)

TestAddons/parallel/CloudSpanner (5.53s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-qfmmv" [6617a9ed-1e1c-42d3-8af3-86a686d5a3cf] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003653094s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (10.36s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-752861 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-752861 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-752861 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f91b0e61-7b23-40d4-afbb-6d061f853ef7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f91b0e61-7b23-40d4-afbb-6d061f853ef7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f91b0e61-7b23-40d4-afbb-6d061f853ef7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003301512s
addons_test.go:967: (dbg) Run:  kubectl --context addons-752861 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 ssh "cat /opt/local-path-provisioner/pvc-ffea6f18-99c9-41ff-a6ec-8000ba91e3a7_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-752861 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-752861 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.36s)

TestAddons/parallel/NvidiaDevicePlugin (6.64s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-zjvl7" [00899f4e-08b9-451d-a504-3dba862c1761] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.077357184s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

TestAddons/parallel/Yakd (10.74s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6smxr" [91b190f0-a4e5-436e-ab3a-6f8390377139] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004054168s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-752861 addons disable yakd --alsologtostderr -v=1: (5.739919484s)
--- PASS: TestAddons/parallel/Yakd (10.74s)

TestAddons/parallel/AmdGpuDevicePlugin (5.48s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-9g4xq" [77b5b2be-b9f1-48cd-8781-0734a5074e73] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003391759s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-752861 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.48s)

TestAddons/StoppedEnableDisable (12.61s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-752861
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-752861: (12.352524374s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-752861
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-752861
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-752861
--- PASS: TestAddons/StoppedEnableDisable (12.61s)

TestCertOptions (30.55s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-695888 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-695888 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (26.894620706s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-695888 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-695888 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-695888 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-695888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-695888
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-695888: (2.948783634s)
--- PASS: TestCertOptions (30.55s)

TestCertExpiration (220.26s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-095959 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-095959 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (32.279584143s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-095959 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-095959 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.541991767s)
helpers_test.go:175: Cleaning up "cert-expiration-095959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-095959
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-095959: (2.435484208s)
--- PASS: TestCertExpiration (220.26s)

TestForceSystemdFlag (24.95s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-364564 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-364564 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.83374332s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-364564 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-364564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-364564
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-364564: (2.834627052s)
--- PASS: TestForceSystemdFlag (24.95s)

TestForceSystemdEnv (33.83s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-082030 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-082030 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.302030739s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-082030 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-082030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-082030
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-082030: (2.131375345s)
--- PASS: TestForceSystemdEnv (33.83s)

TestKVMDriverInstallOrUpdate (1.05s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0929 13:02:33.776797 1101494 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 13:02:33.776989 1101494 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2037302049/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 13:02:33.810404 1101494 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2037302049/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 13:02:33.810449 1101494 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 13:02:33.810557 1101494 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 13:02:33.810600 1101494 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2037302049/001/docker-machine-driver-kvm2
I0929 13:02:34.664421 1101494 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2037302049/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 13:02:34.681363 1101494 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2037302049/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.05s)

TestErrorSpam/setup (19.92s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-523029 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-523029 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-523029 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-523029 --driver=docker  --container-runtime=containerd: (19.923475486s)
--- PASS: TestErrorSpam/setup (19.92s)

TestErrorSpam/start (0.61s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

TestErrorSpam/status (0.9s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 status
--- PASS: TestErrorSpam/status (0.90s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.5s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (1.93s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 stop: (1.740523951s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-523029 --log_dir /tmp/nospam-523029 stop
--- PASS: TestErrorSpam/stop (1.93s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21652-1097891/.minikube/files/etc/test/nested/copy/1101494/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.78s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782022 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-782022 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (46.779616018s)
--- PASS: TestFunctional/serial/StartWithProxy (46.78s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.31s)
=== RUN   TestFunctional/serial/SoftStart
I0929 12:24:38.076535 1101494 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782022 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-782022 --alsologtostderr -v=8: (6.31134299s)
functional_test.go:678: soft start took 6.312289943s for "functional-782022" cluster.
I0929 12:24:44.388416 1101494 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.31s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-782022 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.88s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.88s)

TestFunctional/serial/CacheCmd/cache/add_local (2.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-782022 /tmp/TestFunctionalserialCacheCmdcacheadd_local1051573314/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cache add minikube-local-cache-test:functional-782022
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 cache add minikube-local-cache-test:functional-782022: (1.750031376s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cache delete minikube-local-cache-test:functional-782022
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-782022
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.685356ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 kubectl -- --context functional-782022 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-782022 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.66s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782022 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 12:25:20.683830 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:20.691205 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:20.702543 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:20.724024 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:20.765461 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:20.846935 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:21.008478 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:21.330142 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:21.972182 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:23.253786 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:25.816034 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:25:30.937600 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-782022 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.658959534s)
functional_test.go:776: restart took 40.659081059s for "functional-782022" cluster.
I0929 12:25:32.506350 1101494 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (40.66s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-782022 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.43s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 logs: (1.430509686s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

TestFunctional/serial/LogsFileCmd (1.46s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 logs --file /tmp/TestFunctionalserialLogsFileCmd2485695683/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 logs --file /tmp/TestFunctionalserialLogsFileCmd2485695683/001/logs.txt: (1.46254757s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (4.23s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-782022 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-782022
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-782022: exit status 115 (358.429164ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31711 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-782022 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)
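The report does not include the contents of testdata/invalidsvc.yaml; a minimal sketch of the scenario it exercises, assuming a NodePort Service whose selector matches no running pod, looks like this (the exit status 115 / SVC_UNREACHABLE outcome matches the log above):

# Sketch only: the manifest below is an assumption, not the real testdata file.
kubectl --context functional-782022 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod   # assumption: no pod carries this label, so the service has no endpoints
  ports:
  - port: 80
    targetPort: 80
EOF
out/minikube-linux-amd64 service invalid-svc -p functional-782022   # expected: exit status 115 (SVC_UNREACHABLE)
kubectl --context functional-782022 delete svc invalid-svc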

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 config get cpus: exit status 14 (69.517906ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 config get cpus: exit status 14 (49.249659ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
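Condensed, the assertion above is that `config get` on an unset key fails with exit code 14 while a set key round-trips; the commands are the same ones the test runs:

out/minikube-linux-amd64 -p functional-782022 config unset cpus
out/minikube-linux-amd64 -p functional-782022 config get cpus; echo "exit=$?"   # exit=14: key not found in config
out/minikube-linux-amd64 -p functional-782022 config set cpus 2
out/minikube-linux-amd64 -p functional-782022 config get cpus                   # prints 2
out/minikube-linux-amd64 -p functional-782022 config unset cpus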

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (142.958104ms)

                                                
                                                
-- stdout --
	* [functional-782022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:32:01.270675 1152074 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:32:01.270937 1152074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.270947 1152074 out.go:374] Setting ErrFile to fd 2...
	I0929 12:32:01.270952 1152074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.271163 1152074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:32:01.271635 1152074 out.go:368] Setting JSON to false
	I0929 12:32:01.272651 1152074 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18858,"bootTime":1759130263,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:32:01.272748 1152074 start.go:140] virtualization: kvm guest
	I0929 12:32:01.274623 1152074 out.go:179] * [functional-782022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:32:01.275638 1152074 notify.go:220] Checking for updates...
	I0929 12:32:01.275670 1152074 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:32:01.276678 1152074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:32:01.277696 1152074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:32:01.278663 1152074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:32:01.279560 1152074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:32:01.280475 1152074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:32:01.281857 1152074 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:32:01.282562 1152074 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:32:01.306456 1152074 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:32:01.306539 1152074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:32:01.359943 1152074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 12:32:01.350307422 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:32:01.360083 1152074 docker.go:318] overlay module found
	I0929 12:32:01.361508 1152074 out.go:179] * Using the docker driver based on existing profile
	I0929 12:32:01.362364 1152074 start.go:304] selected driver: docker
	I0929 12:32:01.362377 1152074 start.go:924] validating driver "docker" against &{Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:32:01.362465 1152074 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:32:01.363900 1152074 out.go:203] 
	W0929 12:32:01.364707 1152074 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 12:32:01.365614 1152074 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782022 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.35s)
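The two invocations reduce to the following: an undersized --memory request is rejected during validation even with --dry-run (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the same dry run without the memory override succeeds. Both commands are taken from the log:

out/minikube-linux-amd64 start -p functional-782022 --dry-run --memory 250MB \
  --driver=docker --container-runtime=containerd; echo "exit=$?"   # exit=23
out/minikube-linux-amd64 start -p functional-782022 --dry-run \
  --driver=docker --container-runtime=containerd                   # validates against the existing profile, no error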

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-782022 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (147.103789ms)

                                                
                                                
-- stdout --
	* [functional-782022] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:32:01.623393 1152286 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:32:01.623483 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623490 1152286 out.go:374] Setting ErrFile to fd 2...
	I0929 12:32:01.623494 1152286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:32:01.623773 1152286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:32:01.624207 1152286 out.go:368] Setting JSON to false
	I0929 12:32:01.625266 1152286 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18859,"bootTime":1759130263,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:32:01.625351 1152286 start.go:140] virtualization: kvm guest
	I0929 12:32:01.627031 1152286 out.go:179] * [functional-782022] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 12:32:01.628165 1152286 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 12:32:01.628190 1152286 notify.go:220] Checking for updates...
	I0929 12:32:01.630582 1152286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:32:01.631578 1152286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 12:32:01.632499 1152286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 12:32:01.633553 1152286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:32:01.634463 1152286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:32:01.635841 1152286 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:32:01.636357 1152286 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:32:01.659705 1152286 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:32:01.659797 1152286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:32:01.714441 1152286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 12:32:01.703718947 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:32:01.714537 1152286 docker.go:318] overlay module found
	I0929 12:32:01.716020 1152286 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 12:32:01.717088 1152286 start.go:304] selected driver: docker
	I0929 12:32:01.717115 1152286 start.go:924] validating driver "docker" against &{Name:functional-782022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-782022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:32:01.717223 1152286 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:32:01.718814 1152286 out.go:203] 
	W0929 12:32:01.719743 1152286 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 12:32:01.720687 1152286 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh -n functional-782022 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cp functional-782022:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd894747004/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh -n functional-782022 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh -n functional-782022 "sudo cat /tmp/does/not/exist/cp-test.txt"
E0929 12:25:41.179016 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1101494/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo cat /etc/test/nested/copy/1101494/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1101494.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo cat /etc/ssl/certs/1101494.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1101494.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo cat /usr/share/ca-certificates/1101494.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11014942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo cat /etc/ssl/certs/11014942.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11014942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo cat /usr/share/ca-certificates/11014942.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.57s)
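The `.0` paths checked above follow the OpenSSL subject-hash naming convention used under /etc/ssl/certs; assuming that is how minikube derives them, the hash of the synced PEM can be reproduced inside the node like this:

out/minikube-linux-amd64 -p functional-782022 ssh \
  "openssl x509 -noout -hash -in /etc/ssl/certs/1101494.pem"   # expected to print 51391683 (assumption)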

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-782022 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
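The go-template above prints only the label keys; equivalent, more common ways to inspect the same node labels (not used by the test) are:

kubectl --context functional-782022 get nodes --show-labels
kubectl --context functional-782022 get nodes -o jsonpath='{.items[0].metadata.labels}'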

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh "sudo systemctl is-active docker": exit status 1 (280.539754ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh "sudo systemctl is-active crio": exit status 1 (282.427177ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
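`systemctl is-active` exits non-zero (status 3) for inactive units, which is why both probes above report exit status 1 through the ssh wrapper even though stdout is just "inactive". The complementary check that containerd itself is running is not part of the test, but would look like this (the expected output is an assumption):

out/minikube-linux-amd64 -p functional-782022 ssh "sudo systemctl is-active containerd"   # expected: "active", exit 0 (assumption)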

                                                
                                    
x
+
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-782022 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-782022 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-782022 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1143272: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-782022 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-782022 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-782022 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782022 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-782022
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-782022
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782022 image ls --format short --alsologtostderr:
I0929 12:35:45.191103 1155155 out.go:360] Setting OutFile to fd 1 ...
I0929 12:35:45.191400 1155155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:45.191411 1155155 out.go:374] Setting ErrFile to fd 2...
I0929 12:35:45.191418 1155155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:45.191702 1155155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
I0929 12:35:45.192334 1155155 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:45.192450 1155155 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:45.192851 1155155 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
I0929 12:35:45.210576 1155155 ssh_runner.go:195] Run: systemctl --version
I0929 12:35:45.210624 1155155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
I0929 12:35:45.227552 1155155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
I0929 12:35:45.320378 1155155 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782022 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0            │ sha256:46169d │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/minikube-local-cache-test │ functional-782022  │ sha256:60b813 │ 992B   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0            │ sha256:a0af72 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.0            │ sha256:df0860 │ 26MB   │
│ docker.io/kicbase/echo-server               │ functional-782022  │ sha256:9056ab │ 2.37MB │
│ localhost/my-image                          │ functional-782022  │ sha256:6629e2 │ 775kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.0            │ sha256:90550c │ 27.1MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782022 image ls --format table --alsologtostderr:
I0929 12:35:48.431117 1156305 out.go:360] Setting OutFile to fd 1 ...
I0929 12:35:48.431407 1156305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:48.431420 1156305 out.go:374] Setting ErrFile to fd 2...
I0929 12:35:48.431426 1156305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:48.431659 1156305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
I0929 12:35:48.432258 1156305 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:48.432372 1156305 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:48.432907 1156305 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
I0929 12:35:48.452330 1156305 ssh_runner.go:195] Run: systemctl --version
I0929 12:35:48.452387 1156305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
I0929 12:35:48.471984 1156305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
I0929 12:35:48.568093 1156305 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782022 image ls --format json --alsologtostderr:
[{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"25963701"},{"id":"sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd3
6753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"17385558"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"27066504"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10
.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:6629e2cffa4dc94356da5fba7f94e12f2a5d44ce7e8738beb6f3294013e00ae6","repoDigests":[],"repoTags":["localhost/my-image:functional-782022"],"size":"774885"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"22819719"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:lat
est"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-782022"],"size":"2372971"},{"id":"sha256:60b8134c07410ddb6aa70aba0e80381df62fc42404bcd1b1c2c28c7b4d1a547a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-782022"],"size":"992"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782022 image ls --format json --alsologtostderr:
I0929 12:35:48.187378 1156210 out.go:360] Setting OutFile to fd 1 ...
I0929 12:35:48.187810 1156210 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:48.187826 1156210 out.go:374] Setting ErrFile to fd 2...
I0929 12:35:48.187832 1156210 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:48.188132 1156210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
I0929 12:35:48.188756 1156210 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:48.188850 1156210 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:48.189278 1156210 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
I0929 12:35:48.209310 1156210 ssh_runner.go:195] Run: systemctl --version
I0929 12:35:48.209372 1156210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
I0929 12:35:48.228557 1156210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
I0929 12:35:48.328941 1156210 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
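The JSON format is the easiest one to post-process; a small sketch with jq (jq is an assumption here, not something the test uses) that lists each tag together with its size in bytes:

out/minikube-linux-amd64 -p functional-782022 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'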

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782022 image ls --format yaml --alsologtostderr:
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-782022
size: "2372971"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "27066504"
- id: sha256:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "22819719"
- id: sha256:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "17385558"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:60b8134c07410ddb6aa70aba0e80381df62fc42404bcd1b1c2c28c7b4d1a547a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-782022
size: "992"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "25963701"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782022 image ls --format yaml --alsologtostderr:
I0929 12:35:47.957249 1156127 out.go:360] Setting OutFile to fd 1 ...
I0929 12:35:47.957379 1156127 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:47.957385 1156127 out.go:374] Setting ErrFile to fd 2...
I0929 12:35:47.957399 1156127 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:47.958022 1156127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
I0929 12:35:47.958624 1156127 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:47.958717 1156127 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:47.959140 1156127 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
I0929 12:35:47.976809 1156127 ssh_runner.go:195] Run: systemctl --version
I0929 12:35:47.976855 1156127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
I0929 12:35:47.994024 1156127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
I0929 12:35:48.088753 1156127 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
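Note: the YAML listing above can be reproduced outside the test harness with the same commands the test drives. A minimal sketch, assuming a minikube binary on PATH (the job uses the locally built out/minikube-linux-amd64) and the profile name from this run:
  # List images known to the node's container runtime, in the same YAML format as above.
  minikube -p functional-782022 image ls --format yaml
  # Per the log above, the command shells into the node and queries crictl:
  minikube -p functional-782022 ssh -- sudo crictl images --output json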

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh pgrep buildkitd: exit status 1 (274.536653ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image build -t localhost/my-image:functional-782022 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 image build -t localhost/my-image:functional-782022 testdata/build --alsologtostderr: (2.769414623s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782022 image build -t localhost/my-image:functional-782022 testdata/build --alsologtostderr:
I0929 12:35:45.688369 1155381 out.go:360] Setting OutFile to fd 1 ...
I0929 12:35:45.688686 1155381 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:45.688696 1155381 out.go:374] Setting ErrFile to fd 2...
I0929 12:35:45.688700 1155381 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 12:35:45.689033 1155381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
I0929 12:35:45.689805 1155381 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:45.690778 1155381 config.go:182] Loaded profile config "functional-782022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0929 12:35:45.691377 1155381 cli_runner.go:164] Run: docker container inspect functional-782022 --format={{.State.Status}}
I0929 12:35:45.709188 1155381 ssh_runner.go:195] Run: systemctl --version
I0929 12:35:45.709245 1155381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-782022
I0929 12:35:45.726855 1155381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/functional-782022/id_rsa Username:docker}
I0929 12:35:45.820795 1155381 build_images.go:161] Building image from path: /tmp/build.3201367102.tar
I0929 12:35:45.820898 1155381 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 12:35:45.831856 1155381 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3201367102.tar
I0929 12:35:45.835735 1155381 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3201367102.tar: stat -c "%s %y" /var/lib/minikube/build/build.3201367102.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3201367102.tar': No such file or directory
I0929 12:35:45.835773 1155381 ssh_runner.go:362] scp /tmp/build.3201367102.tar --> /var/lib/minikube/build/build.3201367102.tar (3072 bytes)
I0929 12:35:45.862929 1155381 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3201367102
I0929 12:35:45.872335 1155381 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3201367102 -xf /var/lib/minikube/build/build.3201367102.tar
I0929 12:35:45.881974 1155381 containerd.go:394] Building image: /var/lib/minikube/build/build.3201367102
I0929 12:35:45.882043 1155381 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3201367102 --local dockerfile=/var/lib/minikube/build/build.3201367102 --output type=image,name=localhost/my-image:functional-782022
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e1fd5107f8f171d0c7b3d4526af344cb87eb6aabacc934f1ea79619cbb5c40f6 done
#8 exporting config sha256:6629e2cffa4dc94356da5fba7f94e12f2a5d44ce7e8738beb6f3294013e00ae6 done
#8 naming to localhost/my-image:functional-782022 done
#8 DONE 0.1s
I0929 12:35:48.378453 1155381 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3201367102 --local dockerfile=/var/lib/minikube/build/build.3201367102 --output type=image,name=localhost/my-image:functional-782022: (2.496371604s)
I0929 12:35:48.378531 1155381 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3201367102
I0929 12:35:48.390930 1155381 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3201367102.tar
I0929 12:35:48.401245 1155381 build_images.go:217] Built localhost/my-image:functional-782022 from /tmp/build.3201367102.tar
I0929 12:35:48.401278 1155381 build_images.go:133] succeeded building to: functional-782022
I0929 12:35:48.401284 1155381 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.27s)
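Note: because the runtime is containerd, the build above goes through BuildKit's buildctl rather than a Docker daemon. A rough sketch of the same flow, assuming minikube on PATH; the directory under /var/lib/minikube/build is the per-run temporary path shown in the log:
  # High-level command the test runs:
  minikube -p functional-782022 image build -t localhost/my-image:functional-782022 testdata/build
  # What it executes on the node after copying the build context over SSH (per the log):
  sudo buildctl build --frontend dockerfile.v0 \
    --local context=/var/lib/minikube/build/build.3201367102 \
    --local dockerfile=/var/lib/minikube/build/build.3201367102 \
    --output type=image,name=localhost/my-image:functional-782022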

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.95640047s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-782022
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image load --daemon kicbase/echo-server:functional-782022 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image load --daemon kicbase/echo-server:functional-782022 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-782022
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image load --daemon kicbase/echo-server:functional-782022 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image save kicbase/echo-server:functional-782022 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image rm kicbase/echo-server:functional-782022 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-782022
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 image save --daemon kicbase/echo-server:functional-782022 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-782022
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
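Note: taken together, the ImageCommands subtests above cover a full load/save round trip. A condensed sketch of that sequence, assuming minikube on PATH and a tar path of your choosing (this run used a workspace-local echo-server-save.tar):
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-782022
  minikube -p functional-782022 image load --daemon kicbase/echo-server:functional-782022   # host daemon -> node
  minikube -p functional-782022 image save kicbase/echo-server:functional-782022 /tmp/echo-server-save.tar
  minikube -p functional-782022 image rm kicbase/echo-server:functional-782022
  minikube -p functional-782022 image load /tmp/echo-server-save.tar                        # tar file -> node
  minikube -p functional-782022 image save --daemon kicbase/echo-server:functional-782022   # node -> host daemon
  minikube -p functional-782022 image ls                                                    # verify the result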

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
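Note: all three UpdateContextCmd subtests invoke the same command, which refreshes the profile's kubeconfig entry so kubectl points at the current API server endpoint. A minimal sketch, assuming minikube on PATH:
  minikube -p functional-782022 update-context
  kubectl --context functional-782022 get nodes   # should reach the cluster via the refreshed entry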

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "321.22907ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.883675ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "319.649243ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.814692ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
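Note: the timings above compare the full profile listing against the --light variant, which skips the per-cluster status checks and so returns in tens of milliseconds. The commands, assuming minikube on PATH:
  minikube profile list              # full listing, validates cluster status (~320ms above)
  minikube profile list -l           # light listing (~50ms above)
  minikube profile list -o json
  minikube profile list -o json --light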

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdany-port2570031120/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759149110934407752" to /tmp/TestFunctionalparallelMountCmdany-port2570031120/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759149110934407752" to /tmp/TestFunctionalparallelMountCmdany-port2570031120/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759149110934407752" to /tmp/TestFunctionalparallelMountCmdany-port2570031120/001/test-1759149110934407752
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.195382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 12:31:51.200898 1101494 retry.go:31] will retry after 582.330484ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 12:31 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 12:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 12:31 test-1759149110934407752
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh cat /mount-9p/test-1759149110934407752
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-782022 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4e7736e3-4402-4517-a4cb-bdf634f253d6] Pending
helpers_test.go:352: "busybox-mount" [4e7736e3-4402-4517-a4cb-bdf634f253d6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4e7736e3-4402-4517-a4cb-bdf634f253d6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4e7736e3-4402-4517-a4cb-bdf634f253d6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003193274s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-782022 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdany-port2570031120/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.70s)
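Note: the test drives a 9p mount from a host temp directory into the guest at /mount-9p and verifies it from both sides. A minimal sketch of the same workflow (the host path is a placeholder; the test uses a per-run temp directory):
  minikube -p functional-782022 mount /some/host/dir:/mount-9p &    # mount runs in the foreground; background it or use another terminal
  minikube -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-782022 ssh -- ls -la /mount-9p
  minikube -p functional-782022 ssh "sudo umount -f /mount-9p"      # tear down when finished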

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdspecific-port3447882230/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.368086ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 12:31:57.907499 1101494 retry.go:31] will retry after 483.357325ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdspecific-port3447882230/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh "sudo umount -f /mount-9p": exit status 1 (266.693342ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-782022 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdspecific-port3447882230/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)
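Note: same mount flow, but with the 9p server pinned to a fixed port as in the test above (host path again a placeholder):
  minikube -p functional-782022 mount /some/host/dir:/mount-9p --port 46464 &
  minikube -p functional-782022 ssh "findmnt -T /mount-9p | grep 9p"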

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T" /mount1: exit status 1 (314.191849ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 12:31:59.699682 1101494 retry.go:31] will retry after 716.985489ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-782022 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782022 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3370057904/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)
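Note: VerifyCleanup starts three mounts (/mount1 through /mount3) from the same host directory and then tears them all down in one call, as in the log above:
  minikube mount -p functional-782022 --kill=true   # per the test, kills the running mount processes for the profile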

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 service list: (1.687283627s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-782022 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-782022 service list -o json: (1.714336172s)
functional_test.go:1504: Took "1.71446518s" to run "out/minikube-linux-amd64 -p functional-782022 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
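Note: both ServiceCmd subtests list the cluster's services, plain and as JSON; the JSON form is the one to use when scripting against the output:
  minikube -p functional-782022 service list
  minikube -p functional-782022 service list -o json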

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-782022
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-782022
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-782022
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (102.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0929 12:41:43.750135 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m41.539065541s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (102.25s)
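Note: the HA cluster under test is brought up with the --ha flag, which provisions additional control-plane nodes. The start and status commands from the log, assuming minikube on PATH:
  minikube start -p ha-651583 --ha --memory 3072 --wait true \
    --driver=docker --container-runtime=containerd
  minikube -p ha-651583 status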

                                                
                                    
TestMultiControlPlane/serial/DeployApp (47.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 kubectl -- rollout status deployment/busybox: (45.361608803s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-7rqt8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-gn4q7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-7rqt8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-gn4q7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-7rqt8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-gn4q7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (47.35s)
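Note: DeployApp rolls out a three-replica busybox deployment and checks DNS from every pod. The per-pod check reduces to the following (pod name taken from this run; the test repeats it for all three replicas):
  minikube -p ha-651583 kubectl -- rollout status deployment/busybox
  minikube -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- nslookup kubernetes.default.svc.cluster.local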

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-7rqt8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-7rqt8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-gn4q7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 kubectl -- exec busybox-7b57f96db7-gn4q7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)
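Note: PingHostFromPods resolves host.minikube.internal inside each pod and pings the resulting gateway address (192.168.49.1 on the default docker network). The per-pod commands from the log:
  minikube -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  minikube -p ha-651583 kubectl -- exec busybox-7b57f96db7-4n94k -- sh -c "ping -c 1 192.168.49.1"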

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (12.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 node add --alsologtostderr -v 5: (11.678020198s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (12.65s)
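Note: adding the worker node and re-checking cluster state is a two-command flow:
  minikube -p ha-651583 node add
  minikube -p ha-651583 status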

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-651583 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp testdata/cp-test.txt ha-651583:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3421181201/001/cp-test_ha-651583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583:/home/docker/cp-test.txt ha-651583-m02:/home/docker/cp-test_ha-651583_ha-651583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test_ha-651583_ha-651583-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583:/home/docker/cp-test.txt ha-651583-m03:/home/docker/cp-test_ha-651583_ha-651583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test_ha-651583_ha-651583-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583:/home/docker/cp-test.txt ha-651583-m04:/home/docker/cp-test_ha-651583_ha-651583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test_ha-651583_ha-651583-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp testdata/cp-test.txt ha-651583-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3421181201/001/cp-test_ha-651583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m02:/home/docker/cp-test.txt ha-651583:/home/docker/cp-test_ha-651583-m02_ha-651583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test_ha-651583-m02_ha-651583.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m02:/home/docker/cp-test.txt ha-651583-m03:/home/docker/cp-test_ha-651583-m02_ha-651583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test_ha-651583-m02_ha-651583-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m02:/home/docker/cp-test.txt ha-651583-m04:/home/docker/cp-test_ha-651583-m02_ha-651583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test_ha-651583-m02_ha-651583-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp testdata/cp-test.txt ha-651583-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3421181201/001/cp-test_ha-651583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m03:/home/docker/cp-test.txt ha-651583:/home/docker/cp-test_ha-651583-m03_ha-651583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test_ha-651583-m03_ha-651583.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m03:/home/docker/cp-test.txt ha-651583-m02:/home/docker/cp-test_ha-651583-m03_ha-651583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test_ha-651583-m03_ha-651583-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m03:/home/docker/cp-test.txt ha-651583-m04:/home/docker/cp-test_ha-651583-m03_ha-651583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test_ha-651583-m03_ha-651583-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp testdata/cp-test.txt ha-651583-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3421181201/001/cp-test_ha-651583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m04:/home/docker/cp-test.txt ha-651583:/home/docker/cp-test_ha-651583-m04_ha-651583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583 "sudo cat /home/docker/cp-test_ha-651583-m04_ha-651583.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m04:/home/docker/cp-test.txt ha-651583-m02:/home/docker/cp-test_ha-651583-m04_ha-651583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test_ha-651583-m04_ha-651583-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 cp ha-651583-m04:/home/docker/cp-test.txt ha-651583-m03:/home/docker/cp-test_ha-651583-m04_ha-651583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 ssh -n ha-651583-m03 "sudo cat /home/docker/cp-test_ha-651583-m04_ha-651583-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.07s)
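Note: the copy matrix above exercises the three forms of minikube cp (local to node, node to local, node to node), each verified with ssh + cat. One representative of each, with node names from this run:
  minikube -p ha-651583 cp testdata/cp-test.txt ha-651583:/home/docker/cp-test.txt                     # local -> node
  minikube -p ha-651583 cp ha-651583:/home/docker/cp-test.txt /tmp/cp-test_ha-651583.txt               # node -> local
  minikube -p ha-651583 cp ha-651583:/home/docker/cp-test.txt ha-651583-m02:/home/docker/cp-test.txt   # node -> node
  minikube -p ha-651583 ssh -n ha-651583-m02 "sudo cat /home/docker/cp-test.txt"                       # verify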

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 node stop m02 --alsologtostderr -v 5: (11.955037101s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5: exit status 7 (696.798895ms)

                                                
                                                
-- stdout --
	ha-651583
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-651583-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-651583-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-651583-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:44:40.982667 1180050 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:44:40.982801 1180050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:44:40.982814 1180050 out.go:374] Setting ErrFile to fd 2...
	I0929 12:44:40.982818 1180050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:44:40.983054 1180050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:44:40.983265 1180050 out.go:368] Setting JSON to false
	I0929 12:44:40.983304 1180050 mustload.go:65] Loading cluster: ha-651583
	I0929 12:44:40.983387 1180050 notify.go:220] Checking for updates...
	I0929 12:44:40.983761 1180050 config.go:182] Loaded profile config "ha-651583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:44:40.983785 1180050 status.go:174] checking status of ha-651583 ...
	I0929 12:44:40.984275 1180050 cli_runner.go:164] Run: docker container inspect ha-651583 --format={{.State.Status}}
	I0929 12:44:41.004502 1180050 status.go:371] ha-651583 host status = "Running" (err=<nil>)
	I0929 12:44:41.004558 1180050 host.go:66] Checking if "ha-651583" exists ...
	I0929 12:44:41.005001 1180050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-651583
	I0929 12:44:41.022901 1180050 host.go:66] Checking if "ha-651583" exists ...
	I0929 12:44:41.023185 1180050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:44:41.023231 1180050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-651583
	I0929 12:44:41.041671 1180050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33281 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/ha-651583/id_rsa Username:docker}
	I0929 12:44:41.136351 1180050 ssh_runner.go:195] Run: systemctl --version
	I0929 12:44:41.141152 1180050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:44:41.153739 1180050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:44:41.211840 1180050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:44:41.201069663 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:44:41.212498 1180050 kubeconfig.go:125] found "ha-651583" server: "https://192.168.49.254:8443"
	I0929 12:44:41.212536 1180050 api_server.go:166] Checking apiserver status ...
	I0929 12:44:41.212587 1180050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:44:41.227014 1180050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup
	W0929 12:44:41.239007 1180050 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:44:41.239066 1180050 ssh_runner.go:195] Run: ls
	I0929 12:44:41.242846 1180050 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 12:44:41.247858 1180050 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 12:44:41.247882 1180050 status.go:463] ha-651583 apiserver status = Running (err=<nil>)
	I0929 12:44:41.247893 1180050 status.go:176] ha-651583 status: &{Name:ha-651583 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:44:41.247944 1180050 status.go:174] checking status of ha-651583-m02 ...
	I0929 12:44:41.248233 1180050 cli_runner.go:164] Run: docker container inspect ha-651583-m02 --format={{.State.Status}}
	I0929 12:44:41.266485 1180050 status.go:371] ha-651583-m02 host status = "Stopped" (err=<nil>)
	I0929 12:44:41.266520 1180050 status.go:384] host is not running, skipping remaining checks
	I0929 12:44:41.266532 1180050 status.go:176] ha-651583-m02 status: &{Name:ha-651583-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:44:41.266554 1180050 status.go:174] checking status of ha-651583-m03 ...
	I0929 12:44:41.266801 1180050 cli_runner.go:164] Run: docker container inspect ha-651583-m03 --format={{.State.Status}}
	I0929 12:44:41.284869 1180050 status.go:371] ha-651583-m03 host status = "Running" (err=<nil>)
	I0929 12:44:41.284903 1180050 host.go:66] Checking if "ha-651583-m03" exists ...
	I0929 12:44:41.285225 1180050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-651583-m03
	I0929 12:44:41.303475 1180050 host.go:66] Checking if "ha-651583-m03" exists ...
	I0929 12:44:41.303741 1180050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:44:41.303778 1180050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-651583-m03
	I0929 12:44:41.322288 1180050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/ha-651583-m03/id_rsa Username:docker}
	I0929 12:44:41.417635 1180050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:44:41.430918 1180050 kubeconfig.go:125] found "ha-651583" server: "https://192.168.49.254:8443"
	I0929 12:44:41.430952 1180050 api_server.go:166] Checking apiserver status ...
	I0929 12:44:41.431025 1180050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:44:41.443134 1180050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	W0929 12:44:41.454403 1180050 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:44:41.454492 1180050 ssh_runner.go:195] Run: ls
	I0929 12:44:41.459396 1180050 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 12:44:41.463624 1180050 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 12:44:41.463650 1180050 status.go:463] ha-651583-m03 apiserver status = Running (err=<nil>)
	I0929 12:44:41.463659 1180050 status.go:176] ha-651583-m03 status: &{Name:ha-651583-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:44:41.463678 1180050 status.go:174] checking status of ha-651583-m04 ...
	I0929 12:44:41.463982 1180050 cli_runner.go:164] Run: docker container inspect ha-651583-m04 --format={{.State.Status}}
	I0929 12:44:41.482060 1180050 status.go:371] ha-651583-m04 host status = "Running" (err=<nil>)
	I0929 12:44:41.482086 1180050 host.go:66] Checking if "ha-651583-m04" exists ...
	I0929 12:44:41.482344 1180050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-651583-m04
	I0929 12:44:41.500545 1180050 host.go:66] Checking if "ha-651583-m04" exists ...
	I0929 12:44:41.500799 1180050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:44:41.500839 1180050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-651583-m04
	I0929 12:44:41.519301 1180050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/ha-651583-m04/id_rsa Username:docker}
	I0929 12:44:41.615006 1180050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:44:41.628060 1180050 status.go:176] ha-651583-m04 status: &{Name:ha-651583-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.65s)
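The stderr block above shows the status check confirming the surviving control planes by probing the load-balancer VIP's /healthz endpoint (https://192.168.49.254:8443/healthz returned 200 "ok"). For readers reproducing that probe by hand, here is a minimal Go sketch; skipping certificate verification is an assumption made purely for a quick smoke test, while minikube's own status code authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// probeHealthz fetches an apiserver /healthz endpoint and reports whether it
// answered 200 "ok". TLS verification is skipped here only for illustration.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	// VIP and port match the cluster in this run; adjust for other profiles.
	if err := probeHealthz("https://192.168.49.254:8443/healthz"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}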

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 node start m02 --alsologtostderr -v 5: (8.263083604s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.17s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 stop --alsologtostderr -v 5
E0929 12:45:20.683500 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 stop --alsologtostderr -v 5: (36.925366048s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 start --wait true --alsologtostderr -v 5
E0929 12:45:39.708131 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:39.714492 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:39.725843 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:39.747133 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:39.788517 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:39.870083 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:40.031615 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:40.353299 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:40.995036 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:42.276678 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:44.838338 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:49.959626 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:46:00.201267 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:46:20.683212 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 start --wait true --alsologtostderr -v 5: (57.497570996s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (94.53s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 node delete m03 --alsologtostderr -v 5: (8.264898773s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.06s)
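The kubectl invocation above uses a go-template to print each node's Ready condition on its own line. The sketch below evaluates the same template text against an invented stand-in for `kubectl get nodes -o json` output, just to show how the nested range/if walks the node list; only the template string is taken from the test, the node data is made up.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Template text as used by the test (minus the outer shell quoting).
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Invented stand-in for a node list returned by `kubectl get nodes -o json`.
	nodes := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "MemoryPressure", "status": "False"},
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				},
			},
			map[string]interface{}{
				"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				},
			},
		},
	}

	// Prints one " True" line per node, which is what the test checks for.
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}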

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 stop --alsologtostderr -v 5
E0929 12:47:01.644634 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 stop --alsologtostderr -v 5: (35.765632458s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5: exit status 7 (104.499936ms)

                                                
                                                
-- stdout --
	ha-651583
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-651583-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-651583-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:47:12.435227 1196342 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:47:12.435331 1196342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:47:12.435339 1196342 out.go:374] Setting ErrFile to fd 2...
	I0929 12:47:12.435343 1196342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:47:12.435559 1196342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:47:12.435717 1196342 out.go:368] Setting JSON to false
	I0929 12:47:12.435752 1196342 mustload.go:65] Loading cluster: ha-651583
	I0929 12:47:12.435776 1196342 notify.go:220] Checking for updates...
	I0929 12:47:12.436191 1196342 config.go:182] Loaded profile config "ha-651583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:47:12.436215 1196342 status.go:174] checking status of ha-651583 ...
	I0929 12:47:12.436632 1196342 cli_runner.go:164] Run: docker container inspect ha-651583 --format={{.State.Status}}
	I0929 12:47:12.456373 1196342 status.go:371] ha-651583 host status = "Stopped" (err=<nil>)
	I0929 12:47:12.456394 1196342 status.go:384] host is not running, skipping remaining checks
	I0929 12:47:12.456400 1196342 status.go:176] ha-651583 status: &{Name:ha-651583 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:47:12.456450 1196342 status.go:174] checking status of ha-651583-m02 ...
	I0929 12:47:12.456719 1196342 cli_runner.go:164] Run: docker container inspect ha-651583-m02 --format={{.State.Status}}
	I0929 12:47:12.473772 1196342 status.go:371] ha-651583-m02 host status = "Stopped" (err=<nil>)
	I0929 12:47:12.473795 1196342 status.go:384] host is not running, skipping remaining checks
	I0929 12:47:12.473803 1196342 status.go:176] ha-651583-m02 status: &{Name:ha-651583-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:47:12.473828 1196342 status.go:174] checking status of ha-651583-m04 ...
	I0929 12:47:12.474115 1196342 cli_runner.go:164] Run: docker container inspect ha-651583-m04 --format={{.State.Status}}
	I0929 12:47:12.491595 1196342 status.go:371] ha-651583-m04 host status = "Stopped" (err=<nil>)
	I0929 12:47:12.491623 1196342 status.go:384] host is not running, skipping remaining checks
	I0929 12:47:12.491633 1196342 status.go:176] ha-651583-m04 status: &{Name:ha-651583-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.87s)
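Note that `minikube status` exits non-zero (exit status 7 here) once the nodes are stopped, while still printing usable per-node detail. Below is a hedged Go sketch of how a caller might separate "ran but reported a stopped cluster" from "could not run at all"; the profile name is taken from this run, and no meaning is assumed for the specific exit code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Ask minikube for the status of the profile used in this test run.
	cmd := exec.Command("minikube", "-p", "ha-651583", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all components report Running")
	case errors.As(err, &exitErr):
		// A non-zero exit code (7 in the log above) still comes with per-node
		// output; treat it as "cluster not fully running", not a hard failure.
		fmt.Printf("minikube status exited with code %d\n", exitErr.ExitCode())
	default:
		// The binary could not be run at all (not installed, not on PATH, ...).
		fmt.Printf("could not run minikube: %v\n", err)
	}
}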

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (56.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (55.523087427s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.31s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (25.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 node add --control-plane --alsologtostderr -v 5
E0929 12:48:23.568162 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-651583 node add --control-plane --alsologtostderr -v 5: (24.759660794s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-651583 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (25.69s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
TestJSONOutput/start/Command (42.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-975987 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-975987 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (42.389776554s)
--- PASS: TestJSONOutput/start/Command (42.39s)
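With --output=json, the start command emits one JSON event per line. The following Go sketch consumes that stream and prints step progress; the profile name and flags mirror this run, and the field names (type, data.currentstep, data.totalsteps, data.message) are the ones visible in the TestErrorJSONOutput output later in this report.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Start a profile with machine-readable output, as the test does.
	cmd := exec.Command("minikube", "start", "-p", "json-output-975987",
		"--output=json", "--driver=docker", "--container-runtime=containerd")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Each line is a small JSON object; step events carry progress fields.
	scanner := bufio.NewScanner(stdout)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
	for scanner.Scan() {
		var ev struct {
			Type string            `json:"type"`
			Data map[string]string `json:"data"`
		}
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("[%s/%s] %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("start failed:", err)
	}
}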

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-975987 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-975987 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-975987 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-975987 --output=json --user=testUser: (5.715192667s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-095166 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-095166 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (64.008839ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"725def3a-8cde-4c47-92d1-9873c022c715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-095166] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c0d09dd-be56-41c2-ac12-9ee82cb96bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21652"}}
	{"specversion":"1.0","id":"c9074917-5e06-4974-8f97-d0c0a22810c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f850ba4f-c548-44c2-9819-406ebf2c06da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig"}}
	{"specversion":"1.0","id":"695fe6e2-0e66-41fa-9f0c-e437cbba2c33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube"}}
	{"specversion":"1.0","id":"227b9de2-3339-4644-b6a3-ff95106ee892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"abdc95a7-2426-4a45-8d4e-8f4a38d5b5c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5d62ac80-a6df-4f1d-ab4d-70ab28657536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-095166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-095166
--- PASS: TestErrorJSONOutput (0.21s)
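Each line in the captured output above is a CloudEvents-style object whose data payload carries the user-facing message, and for error events a name and exit code. A small decoding sketch in Go, using only fields visible in the output (the sample line is copied verbatim from the error event above):

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the JSON lines above; everything else
// in the stream is ignored by the decoder.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// One line copied from the output above (the error event).
	line := `{"specversion":"1.0","id":"5d62ac80-a6df-4f1d-ab4d-70ab28657536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("minikube reported %s (exit code %s): %s\n",
			ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}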

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.92s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-115072 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-115072 --network=: (32.82739864s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-115072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-115072
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-115072: (2.076051692s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.92s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.02s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-673315 --network=bridge
E0929 12:50:20.684237 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-673315 --network=bridge: (21.050103335s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-673315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-673315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-673315: (1.950053697s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.02s)

                                                
                                    
TestKicExistingNetwork (26.14s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0929 12:50:36.229682 1101494 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 12:50:36.248126 1101494 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 12:50:36.248213 1101494 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 12:50:36.248235 1101494 cli_runner.go:164] Run: docker network inspect existing-network
W0929 12:50:36.268033 1101494 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 12:50:36.268069 1101494 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0929 12:50:36.268085 1101494 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0929 12:50:36.268206 1101494 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 12:50:36.286903 1101494 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ea048bcecb48 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:2d:df:61:03:8a} reservation:<nil>}
I0929 12:50:36.287334 1101494 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001375e20}
I0929 12:50:36.287366 1101494 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 12:50:36.287418 1101494 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 12:50:36.347974 1101494 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-007179 --network=existing-network
E0929 12:50:39.709735 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-007179 --network=existing-network: (24.018242688s)
helpers_test.go:175: Cleaning up "existing-network-007179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-007179
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-007179: (1.968864404s)
I0929 12:51:02.354438 1101494 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.14s)
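The log above shows the test pre-creating a bridge network (after discovering 192.168.49.0/24 is taken and choosing 192.168.58.0/24) and then starting a profile with --network=existing-network so the cluster container joins it. A simplified Go sketch of those two steps, shelling out to docker and minikube; the names and subnet are copied from this run, and some of the bridge options and labels minikube passes are omitted here.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, echoing it first and failing fast on error,
// roughly the way the test helpers above drive docker and minikube.
func run(name string, args ...string) {
	fmt.Println("+", name, args)
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func main() {
	// Pre-create the bridge network, mirroring (in simplified form) the
	// docker invocation recorded in the log above.
	run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "com.docker.network.driver.mtu=1500",
		"existing-network")

	// Start a profile attached to the pre-existing network.
	run("minikube", "start", "-p", "existing-network-007179",
		"--network=existing-network")
}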

                                                
                                    
TestKicCustomSubnet (24.96s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-788344 --subnet=192.168.60.0/24
E0929 12:51:07.414339 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-788344 --subnet=192.168.60.0/24: (22.822800996s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-788344 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-788344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-788344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-788344: (2.116022233s)
--- PASS: TestKicCustomSubnet (24.96s)
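The verification step reads the effective subnet straight from the network's IPAM config with the inspect format string shown above. A short sketch that performs the same check programmatically and compares it with the requested --subnet value; the profile name and subnet are the ones from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.60.0/24"

	// Same inspect format string the test uses to read the first IPAM subnet.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-788344",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got == want {
		fmt.Println("subnet matches:", got)
	} else {
		fmt.Printf("subnet mismatch: want %s, got %s\n", want, got)
	}
}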

                                                
                                    
TestKicStaticIP (24.83s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-073520 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-073520 --static-ip=192.168.200.200: (22.622493305s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-073520 ip
helpers_test.go:175: Cleaning up "static-ip-073520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-073520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-073520: (2.072652748s)
--- PASS: TestKicStaticIP (24.83s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (47.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-067186 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-067186 --driver=docker  --container-runtime=containerd: (19.114928248s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-080222 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-080222 --driver=docker  --container-runtime=containerd: (22.647491869s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-067186
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-080222
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-080222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-080222
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-080222: (1.931476855s)
helpers_test.go:175: Cleaning up "first-067186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-067186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-067186: (2.371339322s)
--- PASS: TestMinikubeProfile (47.23s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-224548 --memory=3072 --mount-string /tmp/TestMountStartserial535365933/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-224548 --memory=3072 --mount-string /tmp/TestMountStartserial535365933/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.99632585s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.00s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-224548 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-240153 --memory=3072 --mount-string /tmp/TestMountStartserial535365933/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-240153 --memory=3072 --mount-string /tmp/TestMountStartserial535365933/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.374751287s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.38s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-240153 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-224548 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-224548 --alsologtostderr -v=5: (1.660399709s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-240153 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-240153
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-240153: (1.190387316s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-240153
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-240153: (6.161882288s)
--- PASS: TestMountStart/serial/RestartStopped (7.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-240153 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (52.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-211832 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-211832 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.751226815s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (52.22s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (16.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-211832 -- rollout status deployment/busybox: (14.861553837s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-4j49d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-jwfwd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-4j49d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-jwfwd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-4j49d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-jwfwd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.31s)
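The deployment check above resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from a busybox pod scheduled on each node. A condensed Go sketch of the same loop via kubectl exec; the pod names are the ones from this run (a fresh run would list the pods first), and plain kubectl with --context is used here instead of the `minikube kubectl` wrapper the test drives.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-4j49d", "busybox-7b57f96db7-jwfwd"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			// Equivalent to: kubectl --context multinode-211832 exec <pod> -- nslookup <name>
			out, err := exec.Command("kubectl", "--context", "multinode-211832",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}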

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-4j49d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-4j49d -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-jwfwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-211832 -- exec busybox-7b57f96db7-jwfwd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (11.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-211832 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-211832 -v=5 --alsologtostderr: (11.049322986s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (11.71s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-211832 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp testdata/cp-test.txt multinode-211832:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2191335702/001/cp-test_multinode-211832.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832:/home/docker/cp-test.txt multinode-211832-m02:/home/docker/cp-test_multinode-211832_multinode-211832-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m02 "sudo cat /home/docker/cp-test_multinode-211832_multinode-211832-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832:/home/docker/cp-test.txt multinode-211832-m03:/home/docker/cp-test_multinode-211832_multinode-211832-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m03 "sudo cat /home/docker/cp-test_multinode-211832_multinode-211832-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp testdata/cp-test.txt multinode-211832-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2191335702/001/cp-test_multinode-211832-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832-m02:/home/docker/cp-test.txt multinode-211832:/home/docker/cp-test_multinode-211832-m02_multinode-211832.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832 "sudo cat /home/docker/cp-test_multinode-211832-m02_multinode-211832.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832-m02:/home/docker/cp-test.txt multinode-211832-m03:/home/docker/cp-test_multinode-211832-m02_multinode-211832-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m03 "sudo cat /home/docker/cp-test_multinode-211832-m02_multinode-211832-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp testdata/cp-test.txt multinode-211832-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2191335702/001/cp-test_multinode-211832-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832-m03:/home/docker/cp-test.txt multinode-211832:/home/docker/cp-test_multinode-211832-m03_multinode-211832.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832 "sudo cat /home/docker/cp-test_multinode-211832-m03_multinode-211832.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 cp multinode-211832-m03:/home/docker/cp-test.txt multinode-211832-m02:/home/docker/cp-test_multinode-211832-m03_multinode-211832-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 ssh -n multinode-211832-m02 "sudo cat /home/docker/cp-test_multinode-211832-m03_multinode-211832-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.58s)
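Each `cp` above is paired with an `ssh ... sudo cat` of the destination, so the copied file is verified against the source. A minimal Go sketch of that copy-and-verify round trip; the profile, node and paths follow this run, and trimming trailing whitespace before comparing is an assumption made only for illustration.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "multinode-211832"
		node    = "multinode-211832-m02"
		src     = "testdata/cp-test.txt"
		dst     = "/home/docker/cp-test.txt"
	)

	// Copy the file onto the node, as the test helpers above do.
	if err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "cp failed:", err)
		os.Exit(1)
	}

	// Read it back over ssh and compare with the local source.
	remote, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+dst).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "ssh failed:", err)
		os.Exit(1)
	}
	local, err := os.ReadFile(src)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)) {
		fmt.Println("remote copy matches local file")
	} else {
		fmt.Println("remote copy differs from local file")
	}
}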

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-211832 node stop m03: (1.230176724s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-211832 status: exit status 7 (490.422129ms)

                                                
                                                
-- stdout --
	multinode-211832
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-211832-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-211832-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr: exit status 7 (488.578874ms)

                                                
                                                
-- stdout --
	multinode-211832
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-211832-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-211832-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:54:37.776265 1259123 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:54:37.776564 1259123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:54:37.776575 1259123 out.go:374] Setting ErrFile to fd 2...
	I0929 12:54:37.776580 1259123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:54:37.776768 1259123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:54:37.776983 1259123 out.go:368] Setting JSON to false
	I0929 12:54:37.777027 1259123 mustload.go:65] Loading cluster: multinode-211832
	I0929 12:54:37.777094 1259123 notify.go:220] Checking for updates...
	I0929 12:54:37.777468 1259123 config.go:182] Loaded profile config "multinode-211832": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:54:37.777492 1259123 status.go:174] checking status of multinode-211832 ...
	I0929 12:54:37.777979 1259123 cli_runner.go:164] Run: docker container inspect multinode-211832 --format={{.State.Status}}
	I0929 12:54:37.797335 1259123 status.go:371] multinode-211832 host status = "Running" (err=<nil>)
	I0929 12:54:37.797400 1259123 host.go:66] Checking if "multinode-211832" exists ...
	I0929 12:54:37.797822 1259123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-211832
	I0929 12:54:37.816342 1259123 host.go:66] Checking if "multinode-211832" exists ...
	I0929 12:54:37.816584 1259123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:54:37.816623 1259123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-211832
	I0929 12:54:37.835361 1259123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/multinode-211832/id_rsa Username:docker}
	I0929 12:54:37.928916 1259123 ssh_runner.go:195] Run: systemctl --version
	I0929 12:54:37.933457 1259123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:54:37.945793 1259123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:54:38.003124 1259123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 12:54:37.992183514 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:54:38.003820 1259123 kubeconfig.go:125] found "multinode-211832" server: "https://192.168.67.2:8443"
	I0929 12:54:38.003856 1259123 api_server.go:166] Checking apiserver status ...
	I0929 12:54:38.003912 1259123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:54:38.017004 1259123 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup
	W0929 12:54:38.027839 1259123 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:54:38.027894 1259123 ssh_runner.go:195] Run: ls
	I0929 12:54:38.031527 1259123 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 12:54:38.035653 1259123 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 12:54:38.035681 1259123 status.go:463] multinode-211832 apiserver status = Running (err=<nil>)
	I0929 12:54:38.035698 1259123 status.go:176] multinode-211832 status: &{Name:multinode-211832 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:54:38.035719 1259123 status.go:174] checking status of multinode-211832-m02 ...
	I0929 12:54:38.036074 1259123 cli_runner.go:164] Run: docker container inspect multinode-211832-m02 --format={{.State.Status}}
	I0929 12:54:38.053922 1259123 status.go:371] multinode-211832-m02 host status = "Running" (err=<nil>)
	I0929 12:54:38.053947 1259123 host.go:66] Checking if "multinode-211832-m02" exists ...
	I0929 12:54:38.054284 1259123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-211832-m02
	I0929 12:54:38.072603 1259123 host.go:66] Checking if "multinode-211832-m02" exists ...
	I0929 12:54:38.072879 1259123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:54:38.072917 1259123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-211832-m02
	I0929 12:54:38.090530 1259123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21652-1097891/.minikube/machines/multinode-211832-m02/id_rsa Username:docker}
	I0929 12:54:38.184734 1259123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:54:38.196808 1259123 status.go:176] multinode-211832-m02 status: &{Name:multinode-211832-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:54:38.196852 1259123 status.go:174] checking status of multinode-211832-m03 ...
	I0929 12:54:38.197158 1259123 cli_runner.go:164] Run: docker container inspect multinode-211832-m03 --format={{.State.Status}}
	I0929 12:54:38.214837 1259123 status.go:371] multinode-211832-m03 host status = "Stopped" (err=<nil>)
	I0929 12:54:38.214861 1259123 status.go:384] host is not running, skipping remaining checks
	I0929 12:54:38.214868 1259123 status.go:176] multinode-211832-m03 status: &{Name:multinode-211832-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
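The status checks above exit with code 7 when a node reports Stopped, yet the full per-node report is still printed on stdout. Below is a minimal Go sketch, not part of the test suite, of how a caller shelling out to minikube might tolerate that; the binary path and profile name are simply reused from this run.

// status_exitcode_sketch.go: illustrative only, not minikube code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the run above; adjust as needed.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-211832", "status")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all components running:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// In the capture above, exit code 7 still comes with the full per-node report on stdout.
		fmt.Printf("cluster degraded (exit 7):\n%s", out)
	default:
		fmt.Printf("status failed: %v\n%s", err, out)
	}
}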

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-211832 node start m03 -v=5 --alsologtostderr: (6.277268716s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (71.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-211832
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-211832
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-211832: (24.841459521s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-211832 --wait=true -v=5 --alsologtostderr
E0929 12:55:20.684309 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:55:39.707448 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-211832 --wait=true -v=5 --alsologtostderr: (46.074182135s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-211832
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.02s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-211832 node delete m03: (4.578036325s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-211832 stop: (23.698335842s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-211832 status: exit status 7 (84.197087ms)

                                                
                                                
-- stdout --
	multinode-211832
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-211832-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr: exit status 7 (82.830984ms)

                                                
                                                
-- stdout --
	multinode-211832
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-211832-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:56:25.216139 1268766 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:56:25.216387 1268766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:56:25.216395 1268766 out.go:374] Setting ErrFile to fd 2...
	I0929 12:56:25.216399 1268766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:56:25.216590 1268766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 12:56:25.216765 1268766 out.go:368] Setting JSON to false
	I0929 12:56:25.216798 1268766 mustload.go:65] Loading cluster: multinode-211832
	I0929 12:56:25.216860 1268766 notify.go:220] Checking for updates...
	I0929 12:56:25.217210 1268766 config.go:182] Loaded profile config "multinode-211832": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 12:56:25.217240 1268766 status.go:174] checking status of multinode-211832 ...
	I0929 12:56:25.217627 1268766 cli_runner.go:164] Run: docker container inspect multinode-211832 --format={{.State.Status}}
	I0929 12:56:25.235052 1268766 status.go:371] multinode-211832 host status = "Stopped" (err=<nil>)
	I0929 12:56:25.235077 1268766 status.go:384] host is not running, skipping remaining checks
	I0929 12:56:25.235084 1268766 status.go:176] multinode-211832 status: &{Name:multinode-211832 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:56:25.235131 1268766 status.go:174] checking status of multinode-211832-m02 ...
	I0929 12:56:25.235370 1268766 cli_runner.go:164] Run: docker container inspect multinode-211832-m02 --format={{.State.Status}}
	I0929 12:56:25.251952 1268766 status.go:371] multinode-211832-m02 host status = "Stopped" (err=<nil>)
	I0929 12:56:25.251980 1268766 status.go:384] host is not running, skipping remaining checks
	I0929 12:56:25.251988 1268766 status.go:176] multinode-211832-m02 status: &{Name:multinode-211832-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-211832 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-211832 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (44.013489855s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-211832 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.59s)
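The readiness check above uses a kubectl go-template to print each node's Ready condition. The same check can be done by decoding `kubectl get nodes -o json`; the Go sketch below is illustrative only and models just the fields it needs.

// nodes_ready_sketch.go: prints the Ready condition of every node, equivalent to the go-template above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}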

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-211832
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-211832-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-211832-m02 --driver=docker  --container-runtime=containerd: exit status 14 (61.465265ms)

                                                
                                                
-- stdout --
	* [multinode-211832-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-211832-m02' is duplicated with machine name 'multinode-211832-m02' in profile 'multinode-211832'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-211832-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-211832-m03 --driver=docker  --container-runtime=containerd: (21.097997337s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-211832
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-211832: exit status 80 (297.090662ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-211832 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-211832-m03 already exists in multinode-211832-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-211832-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-211832-m03: (2.30019518s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.81s)

                                                
                                    
TestPreload (126.75s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-494205 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0929 12:58:23.752658 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-494205 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (59.859273616s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-494205 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-494205 image pull gcr.io/k8s-minikube/busybox: (2.492923826s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-494205
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-494205: (5.689930444s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-494205 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-494205 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (56.018779588s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-494205 image list
helpers_test.go:175: Cleaning up "test-preload-494205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-494205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-494205: (2.467350656s)
--- PASS: TestPreload (126.75s)

                                                
                                    
TestScheduledStopUnix (100.56s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-049008 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-049008 --memory=3072 --driver=docker  --container-runtime=containerd: (24.598251041s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-049008 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-049008 -n scheduled-stop-049008
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-049008 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 13:00:09.595077 1101494 retry.go:31] will retry after 53.842µs: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.596261 1101494 retry.go:31] will retry after 86.841µs: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.597416 1101494 retry.go:31] will retry after 264.347µs: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.598535 1101494 retry.go:31] will retry after 234.239µs: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.599667 1101494 retry.go:31] will retry after 389.41µs: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.600798 1101494 retry.go:31] will retry after 961.821µs: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.601889 1101494 retry.go:31] will retry after 931.558µs: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.603016 1101494 retry.go:31] will retry after 2.1629ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.606235 1101494 retry.go:31] will retry after 3.165245ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.610454 1101494 retry.go:31] will retry after 3.824749ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.615088 1101494 retry.go:31] will retry after 4.299286ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.620328 1101494 retry.go:31] will retry after 4.430283ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.625680 1101494 retry.go:31] will retry after 18.206126ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.645003 1101494 retry.go:31] will retry after 14.20679ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
I0929 13:00:09.660264 1101494 retry.go:31] will retry after 41.658721ms: open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/scheduled-stop-049008/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-049008 --cancel-scheduled
E0929 13:00:20.684337 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-049008 -n scheduled-stop-049008
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-049008
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-049008 --schedule 15s
E0929 13:00:39.710188 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-049008
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-049008: exit status 7 (69.746634ms)

                                                
                                                
-- stdout --
	scheduled-stop-049008
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-049008 -n scheduled-stop-049008
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-049008 -n scheduled-stop-049008: exit status 7 (70.627097ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-049008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-049008
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-049008: (4.541353159s)
--- PASS: TestScheduledStopUnix (100.56s)
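The retry.go lines above show the test helper polling for the scheduled-stop pid file with a growing delay between attempts. The Go sketch below mirrors only the shape of that pattern; the path is a hypothetical placeholder and the backoff is not minikube's actual retry implementation.

// retry_poll_sketch.go: illustrative polling with a growing delay, as seen in the retry.go lines above.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile re-tries opening path, roughly doubling the delay between attempts,
// and gives up once timeout has elapsed.
func waitForFile(path string, timeout time.Duration) (*os.File, error) {
	deadline := time.Now().Add(timeout)
	delay := 50 * time.Microsecond
	for {
		f, err := os.Open(path)
		if err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Hypothetical path, used only for illustration.
	f, err := waitForFile("/tmp/scheduled-stop-pid", 2*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Println("found", f.Name())
}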

                                                
                                    
TestInsufficientStorage (9.41s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-450275 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-450275 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.952036542s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c34c2a21-5958-451f-9f43-d76a8c4c6be1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-450275] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8dcde6dd-5d7c-4455-881c-80403a024750","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21652"}}
	{"specversion":"1.0","id":"1c701c1a-b0d6-4a04-9e9c-72f3cf1f5df7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e6b2dc0d-ecf6-41bc-bcc3-ff1772dc1cad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig"}}
	{"specversion":"1.0","id":"2ded43b9-dfb1-4130-9619-1ce913ffcf97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube"}}
	{"specversion":"1.0","id":"63860146-60c1-412f-bebc-e76c8ee913a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2f111560-a866-43dc-ba97-d6a2281b1c53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"723440b1-556a-454e-952f-3ea53fba246e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e9430c6f-4437-4cd3-b562-e3a2b9a71d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5ec86d57-60ba-4c69-90bb-f6d4902379fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8817c29a-e6c7-4c3a-8e19-11f498addfd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3e2fb755-16ab-4d41-b72d-80093aba88ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-450275\" primary control-plane node in \"insufficient-storage-450275\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"942a1479-67e9-46a2-9d8c-24a1fe0bedd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3141c4d9-9b84-46c5-947a-5952377bd3a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6653719f-ef85-4c2c-9c18-3192a4233cef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-450275 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-450275 --output=json --layout=cluster: exit status 7 (281.798724ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-450275","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-450275","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 13:01:32.345035 1290964 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-450275" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-450275 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-450275 --output=json --layout=cluster: exit status 7 (281.104602ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-450275","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-450275","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 13:01:32.627034 1291066 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-450275" does not appear in /home/jenkins/minikube-integration/21652-1097891/kubeconfig
	E0929 13:01:32.639492 1291066 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/insufficient-storage-450275/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-450275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-450275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-450275: (1.89230656s)
--- PASS: TestInsufficientStorage (9.41s)
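With --output=json, minikube start emits one JSON event per line (step, info and error types), which is what the storage test above inspects. A minimal Go sketch for reading such a stream follows; it models only the fields seen here and is not the test's own parser.

// events_sketch.go: reads line-delimited minikube JSON events from stdin, e.g.
//   minikube start -p demo --output=json | go run events_sketch.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exitcode %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}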

                                                
                                    
TestRunningBinaryUpgrade (46.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2848359595 start -p running-upgrade-064416 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2848359595 start -p running-upgrade-064416 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (19.818359565s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-064416 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-064416 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.753323244s)
helpers_test.go:175: Cleaning up "running-upgrade-064416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-064416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-064416: (3.347406483s)
--- PASS: TestRunningBinaryUpgrade (46.48s)

                                                
                                    
TestKubernetesUpgrade (321.07s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.841834193s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-629986
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-629986: (1.229305348s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-629986 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-629986 status --format={{.Host}}: exit status 7 (78.917755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.252463747s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-629986 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (73.941649ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-629986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-629986
	    minikube start -p kubernetes-upgrade-629986 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6299862 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-629986 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-629986 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.236943409s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-629986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-629986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-629986: (2.291068308s)
--- PASS: TestKubernetesUpgrade (321.07s)
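The upgrade test above verifies the resulting cluster with `kubectl --context kubernetes-upgrade-629986 version --output=json`. As an illustration, the Go sketch below decodes the serverVersion block from that same command; the context name is just the one from this run.

// kubectl_version_sketch.go: illustrative decode of `kubectl version --output=json`.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type versionInfo struct {
	ServerVersion struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-629986",
		"version", "--output=json").Output()
	if err != nil {
		log.Fatalf("kubectl version: %v", err)
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("server is %s (%s.%s)\n", v.ServerVersion.GitVersion, v.ServerVersion.Major, v.ServerVersion.Minor)
}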

                                                
                                    
TestMissingContainerUpgrade (126.49s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2712949647 start -p missing-upgrade-365643 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2712949647 start -p missing-upgrade-365643 --memory=3072 --driver=docker  --container-runtime=containerd: (54.112590929s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-365643
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-365643: (1.513458056s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-365643
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-365643 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-365643 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.151839556s)
helpers_test.go:175: Cleaning up "missing-upgrade-365643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-365643
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-365643: (2.118756485s)
--- PASS: TestMissingContainerUpgrade (126.49s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-070678 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-070678 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (85.984442ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-070678] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-070678 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0929 13:02:02.776640 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-070678 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.606817949s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-070678 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-070678 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-070678 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.313911263s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-070678 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-070678 status -o json: exit status 2 (339.066241ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-070678","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-070678
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-070678: (1.971106456s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.62s)
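The `status -o json` call above prints a single JSON object per profile (Name, Host, Kubelet, APIServer, Kubeconfig, Worker), and can exit non-zero (status 2 here) while still producing that object on stdout. Below is a small illustrative Go decoder, reusing the binary path and profile name from this run.

// profile_status_sketch.go: illustrative decoder for the single-profile `status -o json` shape shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-070678", "status", "-o", "json").Output()
	// A non-zero exit (as in the capture above) can still carry usable JSON on stdout.
	if err != nil && len(out) == 0 {
		log.Fatalf("status: %v", err)
	}
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}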

                                                
                                    
TestNoKubernetes/serial/Start (4.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-070678 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-070678 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.969081624s)
--- PASS: TestNoKubernetes/serial/Start (4.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-070678 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-070678 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.994742ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.79s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-070678
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-070678: (1.195044159s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-070678 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-070678 --driver=docker  --container-runtime=containerd: (7.017536167s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.02s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-070678 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-070678 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.87192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/false (4.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-321209 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-321209 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (934.311715ms)

                                                
                                                
-- stdout --
	* [false-321209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:02:37.974040 1313368 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:02:37.974384 1313368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:02:37.974397 1313368 out.go:374] Setting ErrFile to fd 2...
	I0929 13:02:37.974404 1313368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:02:37.974705 1313368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1097891/.minikube/bin
	I0929 13:02:37.975339 1313368 out.go:368] Setting JSON to false
	I0929 13:02:37.976550 1313368 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20695,"bootTime":1759130263,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 13:02:37.976677 1313368 start.go:140] virtualization: kvm guest
	I0929 13:02:38.016238 1313368 out.go:179] * [false-321209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 13:02:38.028804 1313368 notify.go:220] Checking for updates...
	I0929 13:02:38.028835 1313368 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:02:38.036115 1313368 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:02:38.057185 1313368 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1097891/kubeconfig
	I0929 13:02:38.236019 1313368 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1097891/.minikube
	I0929 13:02:38.376422 1313368 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 13:02:38.564220 1313368 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:02:38.586843 1313368 config.go:182] Loaded profile config "cert-expiration-095959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:02:38.587010 1313368 config.go:182] Loaded profile config "cert-options-695888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
	I0929 13:02:38.587159 1313368 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:02:38.612858 1313368 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 13:02:38.612998 1313368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:02:38.669233 1313368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:61 SystemTime:2025-09-29 13:02:38.65841562 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 13:02:38.669393 1313368 docker.go:318] overlay module found
	I0929 13:02:38.752783 1313368 out.go:179] * Using the docker driver based on user configuration
	I0929 13:02:38.817023 1313368 start.go:304] selected driver: docker
	I0929 13:02:38.817053 1313368 start.go:924] validating driver "docker" against <nil>
	I0929 13:02:38.817071 1313368 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:02:38.818858 1313368 out.go:203] 
	W0929 13:02:38.819651 1313368 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0929 13:02:38.824894 1313368 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-321209 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-321209" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:02:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-095959
contexts:
- context:
    cluster: cert-expiration-095959
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:02:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-095959
  name: cert-expiration-095959
current-context: ""
kind: Config
users:
- name: cert-expiration-095959
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/cert-expiration-095959/client.crt
    client-key: /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/cert-expiration-095959/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-321209

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321209"

                                                
                                                
----------------------- debugLogs end: false-321209 [took: 3.273471975s] --------------------------------
helpers_test.go:175: Cleaning up "false-321209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-321209
--- PASS: TestNetworkPlugins/group/false (4.39s)
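Aside on the MK_USAGE exit captured above: it is the behaviour the group/false variant asserts. With --container-runtime=containerd there is no built-in pod network, so minikube rejects --cni=false during validation, before any profile is created (hence the later "context was not found" and "Profile not found" lines in the debug logs). For comparison, a sketch of an invocation that would pass this validation; the profile name and CNI choice here are illustrative, not taken from this run:

	out/minikube-linux-amd64 start -p example-cni --memory=3072 --cni=bridge --driver=docker  --container-runtime=containerd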

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (51.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1256699615 start -p stopped-upgrade-803455 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1256699615 start -p stopped-upgrade-803455 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (27.772739523s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1256699615 -p stopped-upgrade-803455 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1256699615 -p stopped-upgrade-803455 stop: (1.772318264s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-803455 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-803455 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.672247779s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-803455
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-803455: (1.213092187s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                    
x
+
TestPause/serial/Start (44.18s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-973145 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-973145 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (44.179877351s)
--- PASS: TestPause/serial/Start (44.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (40.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (40.973739704s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (43.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0929 13:05:20.684144 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (43.906543554s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.91s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-973145 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-973145 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.343785865s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-321209 "pgrep -a kubelet"
I0929 13:05:28.696705 1101494 config.go:182] Loaded profile config "auto-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-321209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p86bl" [30d935a7-ab0c-4856-a597-8bbfe42ba44c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p86bl" [30d935a7-ab0c-4856-a597-8bbfe42ba44c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004593921s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.21s)

                                                
                                    
x
+
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-973145 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-973145 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-973145 --output=json --layout=cluster: exit status 2 (363.775883ms)

                                                
                                                
-- stdout --
	{"Name":"pause-973145","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-973145","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
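Aside on the non-zero exit above: it is consistent with the JSON the command prints. With --output=json --layout=cluster, the paused apiserver is reported with StatusCode 418 ("Paused") and the kubelet as Stopped, and the command exits 2 rather than 0, which is what the test expects after pausing. Assuming jq is available on the host, a quick, illustrative way to pull just the per-node component states from that output (not part of the test itself):

	out/minikube-linux-amd64 status -p pause-973145 --output=json --layout=cluster | jq '.Nodes[].Components'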

                                                
                                    
x
+
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-973145 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-973145 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.83s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-973145 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-973145 --alsologtostderr -v=5: (2.83384457s)
--- PASS: TestPause/serial/DeletePaused (2.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-321209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.61s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.549259615s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-973145
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-973145: exit status 1 (19.665554ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-973145: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (45.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (45.609086705s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (45.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2q2vt" [ca75c0a8-a94a-4815-ae5f-0f29f506e59f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004961447s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-321209 "pgrep -a kubelet"
I0929 13:06:04.986550 1101494 config.go:182] Loaded profile config "kindnet-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-321209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g8xmf" [cd3f6858-e41d-4ad5-84be-13281a6b2b25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g8xmf" [cd3f6858-e41d-4ad5-84be-13281a6b2b25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005365797s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-321209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (34.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (34.068063457s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (34.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-321209 "pgrep -a kubelet"
I0929 13:06:43.581561 1101494 config.go:182] Loaded profile config "custom-flannel-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-321209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jd45l" [55e2c42d-345a-4ae5-a2a5-800d5607ddfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jd45l" [55e2c42d-345a-4ae5-a2a5-800d5607ddfa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.002910379s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-321209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-321209 "pgrep -a kubelet"
I0929 13:07:08.084591 1101494 config.go:182] Loaded profile config "enable-default-cni-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-321209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zc6qt" [8045d3b4-8d9e-4fff-93df-24efd6fe4515] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zc6qt" [8045d3b4-8d9e-4fff-93df-24efd6fe4515] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.007225826s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (46.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (46.101728024s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-321209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (64.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-321209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m4.490311149s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-2mf7z" [b3eedb85-23fd-483e-9595-cd7b5b168f14] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003709228s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-321209 "pgrep -a kubelet"
I0929 13:08:05.376211 1101494 config.go:182] Loaded profile config "flannel-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-321209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7426z" [2578112a-1302-442b-8a9c-0f20351371ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7426z" [2578112a-1302-442b-8a9c-0f20351371ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.007194106s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (51.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-495121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-495121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.831004131s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-321209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.60s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-644246 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-644246 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (40.604214626s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.60s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-321209 "pgrep -a kubelet"
I0929 13:08:40.774562 1101494 config.go:182] Loaded profile config "bridge-321209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-321209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-llwwt" [74c8dadb-77f9-4c2b-9d93-f1c14cdfbe1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-llwwt" [74c8dadb-77f9-4c2b-9d93-f1c14cdfbe1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004418976s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-321209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-321209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-495121 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [73c7d5d5-1fcd-42dc-a5af-12922ca5da15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [73c7d5d5-1fcd-42dc-a5af-12922ca5da15] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003348231s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-495121 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-495121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-495121 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (64.20s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (1m4.202677471s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-495121 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-495121 --alsologtostderr -v=3: (12.806569961s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-644246 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dab162e1-2d05-4001-b203-ff01e2b856a9] Pending
helpers_test.go:352: "busybox" [dab162e1-2d05-4001-b203-ff01e2b856a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dab162e1-2d05-4001-b203-ff01e2b856a9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.002886457s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-644246 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-495121 -n old-k8s-version-495121
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-495121 -n old-k8s-version-495121: exit status 7 (85.092284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-495121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (43.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-495121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-495121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (43.235849881s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-495121 -n old-k8s-version-495121
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-644246 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-644246 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-644246 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-644246 --alsologtostderr -v=3: (11.982766057s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-644246 -n embed-certs-644246
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-644246 -n embed-certs-644246: exit status 7 (71.076408ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-644246 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (44.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-644246 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-644246 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (44.532518308s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-644246 -n embed-certs-644246
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-554589 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [abc5319e-8dcb-4e75-bce5-ef133a58175b] Pending
helpers_test.go:352: "busybox" [abc5319e-8dcb-4e75-bce5-ef133a58175b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [abc5319e-8dcb-4e75-bce5-ef133a58175b] Running
E0929 13:10:20.684266 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/addons-752861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003741787s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-554589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-554589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-554589 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-554589 --alsologtostderr -v=3
E0929 13:10:28.892383 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:28.898758 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:28.910150 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:28.931577 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:28.973062 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:29.054513 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:29.216367 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:29.538085 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:30.179956 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:31.461442 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:34.022829 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-554589 --alsologtostderr -v=3: (11.93270752s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554589 -n no-preload-554589
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554589 -n no-preload-554589: exit status 7 (71.684564ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-554589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (47.90s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0929 13:10:39.144858 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:39.707128 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/functional-782022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:49.386388 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:58.708196 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:58.714637 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:58.726061 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:58.747462 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:58.788895 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:58.870318 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:59.031638 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:59.353477 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:59.995186 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:01.277202 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:03.838717 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:08.960304 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:09.868380 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/auto-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:19.202469 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/kindnet-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-554589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (47.592436559s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554589 -n no-preload-554589
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0929 13:21:43.752102 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/custom-flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:08.286583 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (43.063881059s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c850293b-b927-402a-8d9b-92117f58dc91] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c850293b-b927-402a-8d9b-92117f58dc91] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004018672s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-625526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-625526 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-625526 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-625526 --alsologtostderr -v=3: (11.952932361s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526: exit status 7 (69.26019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-625526 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
E0929 13:22:59.086542 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/flannel-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-625526 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (45.3977876s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-495121 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-495121 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-495121 -n old-k8s-version-495121
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-495121 -n old-k8s-version-495121: exit status 2 (304.719735ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-495121 -n old-k8s-version-495121
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-495121 -n old-k8s-version-495121: exit status 2 (295.845958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-495121 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-495121 -n old-k8s-version-495121
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-495121 -n old-k8s-version-495121
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (27.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (27.174569146s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-644246 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-644246 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-644246 -n embed-certs-644246
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-644246 -n embed-certs-644246: exit status 2 (313.325287ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-644246 -n embed-certs-644246
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-644246 -n embed-certs-644246: exit status 2 (303.559368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-644246 --alsologtostderr -v=1
E0929 13:28:31.350332 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/enable-default-cni-321209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-644246 -n embed-certs-644246
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-644246 -n embed-certs-644246
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-740698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-740698 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-740698 --alsologtostderr -v=3: (1.221747135s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-740698 -n newest-cni-740698
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-740698 -n newest-cni-740698: exit status 7 (69.455197ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-740698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-740698 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.0: (10.197339575s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-740698 -n newest-cni-740698
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-740698 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-740698 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-740698 -n newest-cni-740698
E0929 13:28:59.139363 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:59.145719 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:59.157099 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:59.178547 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:59.219955 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:59.302290 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-740698 -n newest-cni-740698: exit status 2 (301.419981ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-740698 -n newest-cni-740698
E0929 13:28:59.463658 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-740698 -n newest-cni-740698: exit status 2 (301.29751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-740698 --alsologtostderr -v=1
E0929 13:28:59.785453 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-740698 -n newest-cni-740698
E0929 13:29:00.427209 1101494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/old-k8s-version-495121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-740698 -n newest-cni-740698
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-554589 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-554589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554589 -n no-preload-554589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554589 -n no-preload-554589: exit status 2 (287.518962ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-554589 -n no-preload-554589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-554589 -n no-preload-554589: exit status 2 (287.360919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-554589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554589 -n no-preload-554589
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-554589 -n no-preload-554589
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-625526 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-625526 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526: exit status 2 (302.310281ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526: exit status 2 (302.208189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-625526 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-625526 -n default-k8s-diff-port-625526
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

                                                
                                    

Test skip (25/325)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-321209 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-321209" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:02:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-095959
contexts:
- context:
    cluster: cert-expiration-095959
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:02:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-095959
  name: cert-expiration-095959
current-context: ""
kind: Config
users:
- name: cert-expiration-095959
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/cert-expiration-095959/client.crt
    client-key: /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/cert-expiration-095959/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-321209

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321209"

                                                
                                                
----------------------- debugLogs end: kubenet-321209 [took: 3.985311789s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-321209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-321209
--- SKIP: TestNetworkPlugins/group/kubenet (4.38s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-321209 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-321209" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21652-1097891/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:02:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-095959
contexts:
- context:
    cluster: cert-expiration-095959
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 13:02:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-095959
  name: cert-expiration-095959
current-context: ""
kind: Config
users:
- name: cert-expiration-095959
  user:
    client-certificate: /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/cert-expiration-095959/client.crt
    client-key: /home/jenkins/minikube-integration/21652-1097891/.minikube/profiles/cert-expiration-095959/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-321209

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-321209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321209"

                                                
                                                
----------------------- debugLogs end: cilium-321209 [took: 3.647229883s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-321209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-321209
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-849793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-849793
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    